00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2362 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3623 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.085 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.086 The recommended git tool is: git 00:00:00.086 using credential 00000000-0000-0000-0000-000000000002 00:00:00.087 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.110 Fetching changes from the remote Git repository 00:00:00.114 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.146 Using shallow fetch with depth 1 00:00:00.146 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.146 > git --version # timeout=10 00:00:00.177 > git --version # 'git version 2.39.2' 00:00:00.177 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.204 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.204 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.194 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.207 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.220 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.220 > git config core.sparsecheckout # timeout=10 00:00:05.230 > git read-tree -mu HEAD # timeout=10 00:00:05.245 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.264 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.265 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.343 [Pipeline] Start of Pipeline 00:00:05.355 [Pipeline] library 00:00:05.357 Loading library shm_lib@master 00:00:05.357 Library shm_lib@master is cached. Copying from home. 00:00:05.373 [Pipeline] node 00:00:05.399 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:05.401 [Pipeline] { 00:00:05.411 [Pipeline] catchError 00:00:05.412 [Pipeline] { 00:00:05.422 [Pipeline] wrap 00:00:05.429 [Pipeline] { 00:00:05.436 [Pipeline] stage 00:00:05.437 [Pipeline] { (Prologue) 00:00:05.619 [Pipeline] sh 00:00:05.900 + logger -p user.info -t JENKINS-CI 00:00:05.918 [Pipeline] echo 00:00:05.920 Node: WFP21 00:00:05.928 [Pipeline] sh 00:00:06.226 [Pipeline] setCustomBuildProperty 00:00:06.239 [Pipeline] echo 00:00:06.240 Cleanup processes 00:00:06.246 [Pipeline] sh 00:00:06.530 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.530 2421820 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.541 [Pipeline] sh 00:00:06.822 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.822 ++ grep -v 'sudo pgrep' 00:00:06.822 ++ awk '{print $1}' 00:00:06.822 + sudo kill -9 00:00:06.822 + true 00:00:06.834 [Pipeline] cleanWs 00:00:06.841 [WS-CLEANUP] Deleting project workspace... 00:00:06.841 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.847 [WS-CLEANUP] done 00:00:06.851 [Pipeline] setCustomBuildProperty 00:00:06.870 [Pipeline] sh 00:00:07.154 + sudo git config --global --replace-all safe.directory '*' 00:00:07.225 [Pipeline] httpRequest 00:00:07.615 [Pipeline] echo 00:00:07.617 Sorcerer 10.211.164.101 is alive 00:00:07.624 [Pipeline] retry 00:00:07.626 [Pipeline] { 00:00:07.638 [Pipeline] httpRequest 00:00:07.642 HttpMethod: GET 00:00:07.642 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.642 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.644 Response Code: HTTP/1.1 200 OK 00:00:07.645 Success: Status code 200 is in the accepted range: 200,404 00:00:07.645 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.732 [Pipeline] } 00:00:08.749 [Pipeline] // retry 00:00:08.755 [Pipeline] sh 00:00:09.036 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.048 [Pipeline] httpRequest 00:00:09.386 [Pipeline] echo 00:00:09.388 Sorcerer 10.211.164.101 is alive 00:00:09.397 [Pipeline] retry 00:00:09.399 [Pipeline] { 00:00:09.412 [Pipeline] httpRequest 00:00:09.417 HttpMethod: GET 00:00:09.417 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.417 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.434 Response Code: HTTP/1.1 200 OK 00:00:09.434 Success: Status code 200 is in the accepted range: 200,404 00:00:09.435 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:07.261 [Pipeline] } 00:01:07.278 [Pipeline] // retry 00:01:07.286 [Pipeline] sh 00:01:07.576 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:10.130 [Pipeline] sh 00:01:10.416 + git -C spdk log --oneline -n5 00:01:10.416 c13c99a5e test: Various fixes for Fedora40 00:01:10.416 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:10.416 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:10.416 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:10.416 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:10.428 [Pipeline] } 00:01:10.441 [Pipeline] // stage 00:01:10.449 [Pipeline] stage 00:01:10.451 [Pipeline] { (Prepare) 00:01:10.467 [Pipeline] writeFile 00:01:10.482 [Pipeline] sh 00:01:10.767 + logger -p user.info -t JENKINS-CI 00:01:10.780 [Pipeline] sh 00:01:11.092 + logger -p user.info -t JENKINS-CI 00:01:11.103 [Pipeline] sh 00:01:11.383 + cat autorun-spdk.conf 00:01:11.383 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.383 SPDK_TEST_NVMF=1 00:01:11.383 SPDK_TEST_NVME_CLI=1 00:01:11.383 SPDK_TEST_NVMF_NICS=mlx5 00:01:11.383 SPDK_RUN_UBSAN=1 00:01:11.383 NET_TYPE=phy 00:01:11.389 RUN_NIGHTLY=1 00:01:11.394 [Pipeline] readFile 00:01:11.417 [Pipeline] withEnv 00:01:11.419 [Pipeline] { 00:01:11.431 [Pipeline] sh 00:01:11.711 + set -ex 00:01:11.711 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:11.711 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:11.711 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.711 ++ SPDK_TEST_NVMF=1 00:01:11.711 ++ SPDK_TEST_NVME_CLI=1 00:01:11.711 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:11.711 ++ SPDK_RUN_UBSAN=1 00:01:11.711 ++ NET_TYPE=phy 00:01:11.711 ++ RUN_NIGHTLY=1 00:01:11.711 + case 
$SPDK_TEST_NVMF_NICS in 00:01:11.711 + DRIVERS=mlx5_ib 00:01:11.711 + [[ -n mlx5_ib ]] 00:01:11.711 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:11.711 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:18.275 rmmod: ERROR: Module irdma is not currently loaded 00:01:18.275 rmmod: ERROR: Module i40iw is not currently loaded 00:01:18.275 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:18.275 + true 00:01:18.275 + for D in $DRIVERS 00:01:18.275 + sudo modprobe mlx5_ib 00:01:18.275 + exit 0 00:01:18.283 [Pipeline] } 00:01:18.298 [Pipeline] // withEnv 00:01:18.303 [Pipeline] } 00:01:18.316 [Pipeline] // stage 00:01:18.325 [Pipeline] catchError 00:01:18.327 [Pipeline] { 00:01:18.339 [Pipeline] timeout 00:01:18.339 Timeout set to expire in 1 hr 0 min 00:01:18.341 [Pipeline] { 00:01:18.354 [Pipeline] stage 00:01:18.356 [Pipeline] { (Tests) 00:01:18.370 [Pipeline] sh 00:01:18.651 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:18.651 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:18.651 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:18.651 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:18.651 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:18.651 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:18.651 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:18.651 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:18.651 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:18.651 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:18.651 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:18.651 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:18.651 + source /etc/os-release 00:01:18.651 ++ NAME='Fedora Linux' 00:01:18.651 ++ VERSION='39 (Cloud Edition)' 00:01:18.651 ++ ID=fedora 00:01:18.651 ++ VERSION_ID=39 00:01:18.651 ++ VERSION_CODENAME= 00:01:18.651 ++ PLATFORM_ID=platform:f39 00:01:18.651 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:18.651 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:18.651 ++ LOGO=fedora-logo-icon 00:01:18.651 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:18.651 ++ HOME_URL=https://fedoraproject.org/ 00:01:18.651 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:18.651 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:18.651 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:18.651 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:18.651 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:18.651 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:18.651 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:18.651 ++ SUPPORT_END=2024-11-12 00:01:18.651 ++ VARIANT='Cloud Edition' 00:01:18.651 ++ VARIANT_ID=cloud 00:01:18.651 + uname -a 00:01:18.651 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:18.651 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:21.937 Hugepages 00:01:21.937 node hugesize free / total 00:01:21.937 node0 1048576kB 0 / 0 00:01:21.937 node0 2048kB 0 / 0 00:01:21.937 node1 1048576kB 0 / 0 00:01:21.937 node1 2048kB 0 / 0 00:01:21.937 00:01:21.937 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:21.937 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:21.937 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:21.937 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:21.937 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:21.937 I/OAT 
0000:00:04.4 8086 2021 0 ioatdma - - 00:01:21.937 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:21.937 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:21.937 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:21.937 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:21.937 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:21.937 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:21.937 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:21.937 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:21.937 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:21.937 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:21.937 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:21.937 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:21.937 + rm -f /tmp/spdk-ld-path 00:01:21.937 + source autorun-spdk.conf 00:01:21.937 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.937 ++ SPDK_TEST_NVMF=1 00:01:21.937 ++ SPDK_TEST_NVME_CLI=1 00:01:21.937 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:21.937 ++ SPDK_RUN_UBSAN=1 00:01:21.937 ++ NET_TYPE=phy 00:01:21.937 ++ RUN_NIGHTLY=1 00:01:21.937 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:21.937 + [[ -n '' ]] 00:01:21.937 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:21.937 + for M in /var/spdk/build-*-manifest.txt 00:01:21.937 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:21.937 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:21.937 + for M in /var/spdk/build-*-manifest.txt 00:01:21.937 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:21.937 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:21.937 + for M in /var/spdk/build-*-manifest.txt 00:01:21.937 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:21.937 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:21.937 ++ uname 00:01:21.937 + [[ Linux == \L\i\n\u\x ]] 00:01:21.937 + sudo dmesg -T 00:01:21.937 + sudo dmesg --clear 00:01:21.937 + dmesg_pid=2422734 00:01:21.937 + [[ Fedora Linux == FreeBSD ]] 00:01:21.937 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.937 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.937 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:21.937 + [[ -x /usr/src/fio-static/fio ]] 00:01:21.937 + export FIO_BIN=/usr/src/fio-static/fio 00:01:21.937 + FIO_BIN=/usr/src/fio-static/fio 00:01:21.937 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:21.937 + sudo dmesg -Tw 00:01:21.937 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:21.937 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:21.937 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.937 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.937 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:21.937 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.937 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.937 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:21.937 Test configuration: 00:01:21.937 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.937 SPDK_TEST_NVMF=1 00:01:21.937 SPDK_TEST_NVME_CLI=1 00:01:21.937 SPDK_TEST_NVMF_NICS=mlx5 00:01:21.937 SPDK_RUN_UBSAN=1 00:01:21.937 NET_TYPE=phy 00:01:21.937 RUN_NIGHTLY=1 17:10:41 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:21.937 17:10:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:21.937 17:10:41 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:21.937 17:10:41 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:21.937 17:10:41 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:21.937 17:10:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.937 17:10:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.937 17:10:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.937 17:10:41 -- paths/export.sh@5 -- $ export PATH 00:01:21.937 17:10:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.937 17:10:41 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:21.937 17:10:41 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:21.937 17:10:41 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731168641.XXXXXX 00:01:21.937 17:10:41 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731168641.n5SYK1 00:01:21.937 17:10:41 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:21.937 17:10:41 -- common/autobuild_common.sh@446 
-- $ '[' -n '' ']' 00:01:21.937 17:10:41 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:21.937 17:10:41 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:21.938 17:10:41 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:21.938 17:10:41 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:21.938 17:10:41 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:21.938 17:10:41 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.938 17:10:41 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:21.938 17:10:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:21.938 17:10:41 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:21.938 17:10:41 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:21.938 17:10:41 -- spdk/autobuild.sh@16 -- $ date -u 00:01:21.938 Sat Nov 9 04:10:41 PM UTC 2024 00:01:21.938 17:10:41 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:21.938 LTS-67-gc13c99a5e 00:01:21.938 17:10:41 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:21.938 17:10:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:21.938 17:10:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:21.938 17:10:41 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:21.938 17:10:41 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:21.938 17:10:41 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.938 ************************************ 00:01:21.938 START TEST ubsan 00:01:21.938 ************************************ 00:01:21.938 17:10:41 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:21.938 using ubsan 00:01:21.938 00:01:21.938 real 0m0.000s 00:01:21.938 user 0m0.000s 00:01:21.938 sys 0m0.000s 00:01:21.938 17:10:41 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:21.938 17:10:41 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.938 ************************************ 00:01:21.938 END TEST ubsan 00:01:21.938 ************************************ 00:01:21.938 17:10:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:21.938 17:10:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:21.938 17:10:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:21.938 17:10:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:21.938 17:10:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:21.938 17:10:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:21.938 17:10:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:21.938 17:10:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:21.938 17:10:41 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:21.938 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:21.938 Using default DPDK in 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:22.505 Using 'verbs' RDMA provider 00:01:34.968 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:49.843 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:49.843 Creating mk/config.mk...done. 00:01:49.843 Creating mk/cc.flags.mk...done. 00:01:49.843 Type 'make' to build. 00:01:49.843 17:11:08 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:49.843 17:11:08 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:49.843 17:11:08 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:49.843 17:11:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.843 ************************************ 00:01:49.843 START TEST make 00:01:49.843 ************************************ 00:01:49.843 17:11:08 -- common/autotest_common.sh@1114 -- $ make -j112 00:01:49.843 make[1]: Nothing to be done for 'all'. 00:01:56.406 The Meson build system 00:01:56.406 Version: 1.5.0 00:01:56.407 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:01:56.407 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:01:56.407 Build type: native build 00:01:56.407 Program cat found: YES (/usr/bin/cat) 00:01:56.407 Project name: DPDK 00:01:56.407 Project version: 23.11.0 00:01:56.407 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:56.407 C linker for the host machine: cc ld.bfd 2.40-14 00:01:56.407 Host machine cpu family: x86_64 00:01:56.407 Host machine cpu: x86_64 00:01:56.407 Message: ## Building in Developer Mode ## 00:01:56.407 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:56.407 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:56.407 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:56.407 Program python3 found: YES (/usr/bin/python3) 00:01:56.407 Program cat found: YES (/usr/bin/cat) 00:01:56.407 Compiler for C supports arguments -march=native: YES 00:01:56.407 Checking for size of "void *" : 8 00:01:56.407 Checking for size of "void *" : 8 (cached) 00:01:56.407 Library m found: YES 00:01:56.407 Library numa found: YES 00:01:56.407 Has header "numaif.h" : YES 00:01:56.407 Library fdt found: NO 00:01:56.407 Library execinfo found: NO 00:01:56.407 Has header "execinfo.h" : YES 00:01:56.407 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:56.407 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:56.407 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:56.407 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:56.407 Run-time dependency openssl found: YES 3.1.1 00:01:56.407 Run-time dependency libpcap found: YES 1.10.4 00:01:56.407 Has header "pcap.h" with dependency libpcap: YES 00:01:56.407 Compiler for C supports arguments -Wcast-qual: YES 00:01:56.407 Compiler for C supports arguments -Wdeprecated: YES 00:01:56.407 Compiler for C supports arguments -Wformat: YES 00:01:56.407 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:56.407 Compiler for C supports arguments -Wformat-security: NO 00:01:56.407 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:56.407 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:56.407 Compiler for C supports arguments 
-Wnested-externs: YES 00:01:56.407 Compiler for C supports arguments -Wold-style-definition: YES 00:01:56.407 Compiler for C supports arguments -Wpointer-arith: YES 00:01:56.407 Compiler for C supports arguments -Wsign-compare: YES 00:01:56.407 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:56.407 Compiler for C supports arguments -Wundef: YES 00:01:56.407 Compiler for C supports arguments -Wwrite-strings: YES 00:01:56.407 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:56.407 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:56.407 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:56.407 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:56.407 Program objdump found: YES (/usr/bin/objdump) 00:01:56.407 Compiler for C supports arguments -mavx512f: YES 00:01:56.407 Checking if "AVX512 checking" compiles: YES 00:01:56.407 Fetching value of define "__SSE4_2__" : 1 00:01:56.407 Fetching value of define "__AES__" : 1 00:01:56.407 Fetching value of define "__AVX__" : 1 00:01:56.407 Fetching value of define "__AVX2__" : 1 00:01:56.407 Fetching value of define "__AVX512BW__" : 1 00:01:56.407 Fetching value of define "__AVX512CD__" : 1 00:01:56.407 Fetching value of define "__AVX512DQ__" : 1 00:01:56.407 Fetching value of define "__AVX512F__" : 1 00:01:56.407 Fetching value of define "__AVX512VL__" : 1 00:01:56.407 Fetching value of define "__PCLMUL__" : 1 00:01:56.407 Fetching value of define "__RDRND__" : 1 00:01:56.407 Fetching value of define "__RDSEED__" : 1 00:01:56.407 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:56.407 Fetching value of define "__znver1__" : (undefined) 00:01:56.407 Fetching value of define "__znver2__" : (undefined) 00:01:56.407 Fetching value of define "__znver3__" : (undefined) 00:01:56.407 Fetching value of define "__znver4__" : (undefined) 00:01:56.407 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:56.407 Message: lib/log: Defining dependency "log" 00:01:56.407 Message: lib/kvargs: Defining dependency "kvargs" 00:01:56.407 Message: lib/telemetry: Defining dependency "telemetry" 00:01:56.407 Checking for function "getentropy" : NO 00:01:56.407 Message: lib/eal: Defining dependency "eal" 00:01:56.407 Message: lib/ring: Defining dependency "ring" 00:01:56.407 Message: lib/rcu: Defining dependency "rcu" 00:01:56.407 Message: lib/mempool: Defining dependency "mempool" 00:01:56.407 Message: lib/mbuf: Defining dependency "mbuf" 00:01:56.407 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:56.407 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:56.407 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:56.407 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:56.407 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:56.407 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:56.407 Compiler for C supports arguments -mpclmul: YES 00:01:56.407 Compiler for C supports arguments -maes: YES 00:01:56.407 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:56.407 Compiler for C supports arguments -mavx512bw: YES 00:01:56.407 Compiler for C supports arguments -mavx512dq: YES 00:01:56.407 Compiler for C supports arguments -mavx512vl: YES 00:01:56.407 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:56.407 Compiler for C supports arguments -mavx2: YES 00:01:56.407 Compiler for C supports arguments -mavx: YES 00:01:56.407 Message: lib/net: Defining dependency "net" 
00:01:56.407 Message: lib/meter: Defining dependency "meter" 00:01:56.407 Message: lib/ethdev: Defining dependency "ethdev" 00:01:56.407 Message: lib/pci: Defining dependency "pci" 00:01:56.407 Message: lib/cmdline: Defining dependency "cmdline" 00:01:56.407 Message: lib/hash: Defining dependency "hash" 00:01:56.407 Message: lib/timer: Defining dependency "timer" 00:01:56.407 Message: lib/compressdev: Defining dependency "compressdev" 00:01:56.407 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:56.407 Message: lib/dmadev: Defining dependency "dmadev" 00:01:56.407 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:56.407 Message: lib/power: Defining dependency "power" 00:01:56.407 Message: lib/reorder: Defining dependency "reorder" 00:01:56.407 Message: lib/security: Defining dependency "security" 00:01:56.407 Has header "linux/userfaultfd.h" : YES 00:01:56.407 Has header "linux/vduse.h" : YES 00:01:56.407 Message: lib/vhost: Defining dependency "vhost" 00:01:56.407 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:56.407 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:56.407 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:56.407 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:56.407 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:56.407 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:56.407 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:56.407 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:56.407 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:56.407 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:56.407 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:56.407 Configuring doxy-api-html.conf using configuration 00:01:56.407 Configuring doxy-api-man.conf using configuration 00:01:56.407 Program mandb found: YES (/usr/bin/mandb) 00:01:56.407 Program sphinx-build found: NO 00:01:56.407 Configuring rte_build_config.h using configuration 00:01:56.407 Message: 00:01:56.407 ================= 00:01:56.407 Applications Enabled 00:01:56.407 ================= 00:01:56.407 00:01:56.407 apps: 00:01:56.407 00:01:56.407 00:01:56.407 Message: 00:01:56.407 ================= 00:01:56.407 Libraries Enabled 00:01:56.407 ================= 00:01:56.407 00:01:56.407 libs: 00:01:56.407 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:56.407 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:56.407 cryptodev, dmadev, power, reorder, security, vhost, 00:01:56.407 00:01:56.407 Message: 00:01:56.407 =============== 00:01:56.407 Drivers Enabled 00:01:56.407 =============== 00:01:56.407 00:01:56.407 common: 00:01:56.407 00:01:56.407 bus: 00:01:56.407 pci, vdev, 00:01:56.407 mempool: 00:01:56.407 ring, 00:01:56.407 dma: 00:01:56.407 00:01:56.407 net: 00:01:56.407 00:01:56.407 crypto: 00:01:56.407 00:01:56.407 compress: 00:01:56.407 00:01:56.407 vdpa: 00:01:56.407 00:01:56.407 00:01:56.407 Message: 00:01:56.407 ================= 00:01:56.407 Content Skipped 00:01:56.407 ================= 00:01:56.407 00:01:56.407 apps: 00:01:56.407 dumpcap: explicitly disabled via build config 00:01:56.407 graph: explicitly disabled via build config 00:01:56.407 pdump: explicitly disabled via build config 00:01:56.407 proc-info: explicitly disabled via build config 00:01:56.407 test-acl: explicitly disabled via build 
config 00:01:56.407 test-bbdev: explicitly disabled via build config 00:01:56.407 test-cmdline: explicitly disabled via build config 00:01:56.407 test-compress-perf: explicitly disabled via build config 00:01:56.407 test-crypto-perf: explicitly disabled via build config 00:01:56.407 test-dma-perf: explicitly disabled via build config 00:01:56.407 test-eventdev: explicitly disabled via build config 00:01:56.407 test-fib: explicitly disabled via build config 00:01:56.407 test-flow-perf: explicitly disabled via build config 00:01:56.407 test-gpudev: explicitly disabled via build config 00:01:56.407 test-mldev: explicitly disabled via build config 00:01:56.407 test-pipeline: explicitly disabled via build config 00:01:56.407 test-pmd: explicitly disabled via build config 00:01:56.407 test-regex: explicitly disabled via build config 00:01:56.407 test-sad: explicitly disabled via build config 00:01:56.407 test-security-perf: explicitly disabled via build config 00:01:56.407 00:01:56.407 libs: 00:01:56.407 metrics: explicitly disabled via build config 00:01:56.408 acl: explicitly disabled via build config 00:01:56.408 bbdev: explicitly disabled via build config 00:01:56.408 bitratestats: explicitly disabled via build config 00:01:56.408 bpf: explicitly disabled via build config 00:01:56.408 cfgfile: explicitly disabled via build config 00:01:56.408 distributor: explicitly disabled via build config 00:01:56.408 efd: explicitly disabled via build config 00:01:56.408 eventdev: explicitly disabled via build config 00:01:56.408 dispatcher: explicitly disabled via build config 00:01:56.408 gpudev: explicitly disabled via build config 00:01:56.408 gro: explicitly disabled via build config 00:01:56.408 gso: explicitly disabled via build config 00:01:56.408 ip_frag: explicitly disabled via build config 00:01:56.408 jobstats: explicitly disabled via build config 00:01:56.408 latencystats: explicitly disabled via build config 00:01:56.408 lpm: explicitly disabled via build config 00:01:56.408 member: explicitly disabled via build config 00:01:56.408 pcapng: explicitly disabled via build config 00:01:56.408 rawdev: explicitly disabled via build config 00:01:56.408 regexdev: explicitly disabled via build config 00:01:56.408 mldev: explicitly disabled via build config 00:01:56.408 rib: explicitly disabled via build config 00:01:56.408 sched: explicitly disabled via build config 00:01:56.408 stack: explicitly disabled via build config 00:01:56.408 ipsec: explicitly disabled via build config 00:01:56.408 pdcp: explicitly disabled via build config 00:01:56.408 fib: explicitly disabled via build config 00:01:56.408 port: explicitly disabled via build config 00:01:56.408 pdump: explicitly disabled via build config 00:01:56.408 table: explicitly disabled via build config 00:01:56.408 pipeline: explicitly disabled via build config 00:01:56.408 graph: explicitly disabled via build config 00:01:56.408 node: explicitly disabled via build config 00:01:56.408 00:01:56.408 drivers: 00:01:56.408 common/cpt: not in enabled drivers build config 00:01:56.408 common/dpaax: not in enabled drivers build config 00:01:56.408 common/iavf: not in enabled drivers build config 00:01:56.408 common/idpf: not in enabled drivers build config 00:01:56.408 common/mvep: not in enabled drivers build config 00:01:56.408 common/octeontx: not in enabled drivers build config 00:01:56.408 bus/auxiliary: not in enabled drivers build config 00:01:56.408 bus/cdx: not in enabled drivers build config 00:01:56.408 bus/dpaa: not in enabled drivers build 
config 00:01:56.408 bus/fslmc: not in enabled drivers build config 00:01:56.408 bus/ifpga: not in enabled drivers build config 00:01:56.408 bus/platform: not in enabled drivers build config 00:01:56.408 bus/vmbus: not in enabled drivers build config 00:01:56.408 common/cnxk: not in enabled drivers build config 00:01:56.408 common/mlx5: not in enabled drivers build config 00:01:56.408 common/nfp: not in enabled drivers build config 00:01:56.408 common/qat: not in enabled drivers build config 00:01:56.408 common/sfc_efx: not in enabled drivers build config 00:01:56.408 mempool/bucket: not in enabled drivers build config 00:01:56.408 mempool/cnxk: not in enabled drivers build config 00:01:56.408 mempool/dpaa: not in enabled drivers build config 00:01:56.408 mempool/dpaa2: not in enabled drivers build config 00:01:56.408 mempool/octeontx: not in enabled drivers build config 00:01:56.408 mempool/stack: not in enabled drivers build config 00:01:56.408 dma/cnxk: not in enabled drivers build config 00:01:56.408 dma/dpaa: not in enabled drivers build config 00:01:56.408 dma/dpaa2: not in enabled drivers build config 00:01:56.408 dma/hisilicon: not in enabled drivers build config 00:01:56.408 dma/idxd: not in enabled drivers build config 00:01:56.408 dma/ioat: not in enabled drivers build config 00:01:56.408 dma/skeleton: not in enabled drivers build config 00:01:56.408 net/af_packet: not in enabled drivers build config 00:01:56.408 net/af_xdp: not in enabled drivers build config 00:01:56.408 net/ark: not in enabled drivers build config 00:01:56.408 net/atlantic: not in enabled drivers build config 00:01:56.408 net/avp: not in enabled drivers build config 00:01:56.408 net/axgbe: not in enabled drivers build config 00:01:56.408 net/bnx2x: not in enabled drivers build config 00:01:56.408 net/bnxt: not in enabled drivers build config 00:01:56.408 net/bonding: not in enabled drivers build config 00:01:56.408 net/cnxk: not in enabled drivers build config 00:01:56.408 net/cpfl: not in enabled drivers build config 00:01:56.408 net/cxgbe: not in enabled drivers build config 00:01:56.408 net/dpaa: not in enabled drivers build config 00:01:56.408 net/dpaa2: not in enabled drivers build config 00:01:56.408 net/e1000: not in enabled drivers build config 00:01:56.408 net/ena: not in enabled drivers build config 00:01:56.408 net/enetc: not in enabled drivers build config 00:01:56.408 net/enetfec: not in enabled drivers build config 00:01:56.408 net/enic: not in enabled drivers build config 00:01:56.408 net/failsafe: not in enabled drivers build config 00:01:56.408 net/fm10k: not in enabled drivers build config 00:01:56.408 net/gve: not in enabled drivers build config 00:01:56.408 net/hinic: not in enabled drivers build config 00:01:56.408 net/hns3: not in enabled drivers build config 00:01:56.408 net/i40e: not in enabled drivers build config 00:01:56.408 net/iavf: not in enabled drivers build config 00:01:56.408 net/ice: not in enabled drivers build config 00:01:56.408 net/idpf: not in enabled drivers build config 00:01:56.408 net/igc: not in enabled drivers build config 00:01:56.408 net/ionic: not in enabled drivers build config 00:01:56.408 net/ipn3ke: not in enabled drivers build config 00:01:56.408 net/ixgbe: not in enabled drivers build config 00:01:56.408 net/mana: not in enabled drivers build config 00:01:56.408 net/memif: not in enabled drivers build config 00:01:56.408 net/mlx4: not in enabled drivers build config 00:01:56.408 net/mlx5: not in enabled drivers build config 00:01:56.408 net/mvneta: not in 
enabled drivers build config 00:01:56.408 net/mvpp2: not in enabled drivers build config 00:01:56.408 net/netvsc: not in enabled drivers build config 00:01:56.408 net/nfb: not in enabled drivers build config 00:01:56.408 net/nfp: not in enabled drivers build config 00:01:56.408 net/ngbe: not in enabled drivers build config 00:01:56.408 net/null: not in enabled drivers build config 00:01:56.408 net/octeontx: not in enabled drivers build config 00:01:56.408 net/octeon_ep: not in enabled drivers build config 00:01:56.408 net/pcap: not in enabled drivers build config 00:01:56.408 net/pfe: not in enabled drivers build config 00:01:56.408 net/qede: not in enabled drivers build config 00:01:56.408 net/ring: not in enabled drivers build config 00:01:56.408 net/sfc: not in enabled drivers build config 00:01:56.408 net/softnic: not in enabled drivers build config 00:01:56.408 net/tap: not in enabled drivers build config 00:01:56.408 net/thunderx: not in enabled drivers build config 00:01:56.408 net/txgbe: not in enabled drivers build config 00:01:56.408 net/vdev_netvsc: not in enabled drivers build config 00:01:56.408 net/vhost: not in enabled drivers build config 00:01:56.408 net/virtio: not in enabled drivers build config 00:01:56.408 net/vmxnet3: not in enabled drivers build config 00:01:56.408 raw/*: missing internal dependency, "rawdev" 00:01:56.408 crypto/armv8: not in enabled drivers build config 00:01:56.408 crypto/bcmfs: not in enabled drivers build config 00:01:56.408 crypto/caam_jr: not in enabled drivers build config 00:01:56.408 crypto/ccp: not in enabled drivers build config 00:01:56.408 crypto/cnxk: not in enabled drivers build config 00:01:56.408 crypto/dpaa_sec: not in enabled drivers build config 00:01:56.408 crypto/dpaa2_sec: not in enabled drivers build config 00:01:56.408 crypto/ipsec_mb: not in enabled drivers build config 00:01:56.408 crypto/mlx5: not in enabled drivers build config 00:01:56.408 crypto/mvsam: not in enabled drivers build config 00:01:56.408 crypto/nitrox: not in enabled drivers build config 00:01:56.408 crypto/null: not in enabled drivers build config 00:01:56.408 crypto/octeontx: not in enabled drivers build config 00:01:56.408 crypto/openssl: not in enabled drivers build config 00:01:56.408 crypto/scheduler: not in enabled drivers build config 00:01:56.408 crypto/uadk: not in enabled drivers build config 00:01:56.408 crypto/virtio: not in enabled drivers build config 00:01:56.408 compress/isal: not in enabled drivers build config 00:01:56.408 compress/mlx5: not in enabled drivers build config 00:01:56.408 compress/octeontx: not in enabled drivers build config 00:01:56.408 compress/zlib: not in enabled drivers build config 00:01:56.408 regex/*: missing internal dependency, "regexdev" 00:01:56.408 ml/*: missing internal dependency, "mldev" 00:01:56.408 vdpa/ifc: not in enabled drivers build config 00:01:56.408 vdpa/mlx5: not in enabled drivers build config 00:01:56.408 vdpa/nfp: not in enabled drivers build config 00:01:56.408 vdpa/sfc: not in enabled drivers build config 00:01:56.408 event/*: missing internal dependency, "eventdev" 00:01:56.408 baseband/*: missing internal dependency, "bbdev" 00:01:56.408 gpu/*: missing internal dependency, "gpudev" 00:01:56.408 00:01:56.408 00:01:56.667 Build targets in project: 85 00:01:56.667 00:01:56.667 DPDK 23.11.0 00:01:56.667 00:01:56.667 User defined options 00:01:56.667 buildtype : debug 00:01:56.667 default_library : shared 00:01:56.667 libdir : lib 00:01:56.667 prefix : 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:56.667 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:56.667 c_link_args : 00:01:56.667 cpu_instruction_set: native 00:01:56.667 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:56.667 disable_libs : bbdev,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:56.667 enable_docs : false 00:01:56.667 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:56.667 enable_kmods : false 00:01:56.667 tests : false 00:01:56.667 00:01:56.667 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.242 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:57.242 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:57.242 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:57.242 [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:57.242 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:57.242 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:57.242 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:57.242 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:57.242 [8/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:57.242 [9/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:57.242 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:57.242 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:57.242 [12/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:57.242 [13/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:57.242 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:57.242 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:57.242 [16/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:57.242 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:57.242 [18/265] Linking static target lib/librte_kvargs.a 00:01:57.242 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:57.242 [20/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:57.242 [21/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:57.242 [22/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:57.242 [23/265] Linking static target lib/librte_log.a 00:01:57.242 [24/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:57.504 [25/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:57.504 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:57.504 [27/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:57.504 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:57.504 [29/265] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:57.504 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:57.504 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:57.504 [32/265] Linking static target lib/librte_pci.a 00:01:57.504 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:57.504 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:57.504 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:57.504 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:57.504 [37/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:57.504 [38/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:57.504 [39/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:57.504 [40/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:57.763 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:57.763 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:57.763 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:57.763 [44/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:57.763 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:57.763 [46/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:57.763 [47/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:57.763 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:57.763 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:57.763 [50/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:57.763 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:57.763 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:57.763 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:57.763 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:57.763 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:57.763 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:57.763 [57/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:57.763 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:57.763 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:57.763 [60/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:57.763 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:57.763 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:57.763 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:57.763 [64/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:57.763 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:57.763 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:57.763 [67/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:57.763 [68/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:57.763 [69/265] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:57.763 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:57.763 [71/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:57.763 [72/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:57.763 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:57.763 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:57.763 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:57.763 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:57.763 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:57.763 [78/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:57.763 [79/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.763 [80/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:57.763 [81/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:57.763 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:57.763 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:57.763 [84/265] Linking static target lib/librte_meter.a 00:01:57.763 [85/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.763 [86/265] Linking static target lib/librte_telemetry.a 00:01:57.763 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:57.763 [88/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:57.763 [89/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:57.763 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:57.763 [91/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:57.763 [92/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:57.763 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:57.763 [94/265] Linking static target lib/librte_ring.a 00:01:57.763 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:57.763 [96/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:57.763 [97/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:57.763 [98/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:57.763 [99/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:57.763 [100/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:57.763 [101/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:57.763 [102/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:57.763 [103/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:57.763 [104/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:57.763 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:57.763 [106/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:57.763 [107/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:57.763 [108/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:57.763 [109/265] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:57.763 [110/265] Linking static target lib/librte_cmdline.a 00:01:58.023 [111/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:58.023 [112/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:58.023 [113/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.023 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:58.023 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:58.023 [116/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:58.023 [117/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:58.023 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:58.023 [119/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:58.023 [120/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.023 [121/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:58.023 [122/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:58.023 [123/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:58.023 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:58.023 [125/265] Linking static target lib/librte_timer.a 00:01:58.023 [126/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:58.023 [127/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.023 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:58.023 [129/265] Linking static target lib/librte_rcu.a 00:01:58.023 [130/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:58.023 [131/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.023 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:58.023 [133/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:58.023 [134/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:58.023 [135/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:58.023 [136/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:58.023 [137/265] Linking static target lib/librte_net.a 00:01:58.023 [138/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:58.023 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:58.023 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:58.023 [141/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:58.023 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:58.023 [143/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:58.023 [144/265] Linking static target lib/librte_mempool.a 00:01:58.023 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:58.023 [146/265] Linking static target lib/librte_compressdev.a 00:01:58.023 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:58.023 [148/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:58.023 [149/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:58.023 [150/265] Linking static target lib/librte_dmadev.a 
00:01:58.023 [151/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.023 [152/265] Linking static target lib/librte_eal.a 00:01:58.023 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:58.023 [154/265] Linking static target lib/librte_power.a 00:01:58.023 [155/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:58.023 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:58.023 [157/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:58.023 [158/265] Linking static target lib/librte_reorder.a 00:01:58.023 [159/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:58.023 [160/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.023 [161/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.023 [162/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.024 [163/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:58.024 [164/265] Linking static target lib/librte_mbuf.a 00:01:58.024 [165/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:58.024 [166/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:58.024 [167/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:58.024 [168/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:58.024 [169/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:58.024 [170/265] Linking target lib/librte_log.so.24.0 00:01:58.283 [171/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:58.283 [172/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:58.283 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:58.283 [174/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.283 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:58.283 [176/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:58.283 [177/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:58.283 [178/265] Linking static target lib/librte_security.a 00:01:58.283 [179/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:58.283 [180/265] Linking static target lib/librte_hash.a 00:01:58.283 [181/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:58.283 [182/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:58.283 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:58.283 [184/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:58.283 [185/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:58.283 [186/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.283 [187/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.283 [188/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:58.283 [189/265] Linking target lib/librte_kvargs.so.24.0 00:01:58.283 [190/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:58.283 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:58.283 
[192/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.283 [193/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.283 [194/265] Linking static target drivers/librte_bus_vdev.a 00:01:58.284 [195/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.543 [196/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.543 [197/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:58.543 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:58.543 [199/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.543 [200/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:58.543 [201/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.543 [202/265] Linking target lib/librte_telemetry.so.24.0 00:01:58.543 [203/265] Linking static target drivers/librte_bus_pci.a 00:01:58.543 [204/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:58.543 [205/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:58.543 [206/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.543 [207/265] Linking static target lib/librte_cryptodev.a 00:01:58.543 [208/265] Linking static target drivers/librte_mempool_ring.a 00:01:58.543 [209/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.543 [210/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.543 [211/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.543 [212/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:58.802 [213/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.802 [214/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:58.802 [215/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.802 [216/265] Linking static target lib/librte_ethdev.a 00:01:58.803 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.062 [218/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.062 [219/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.062 [220/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:59.062 [221/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.062 [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.322 [223/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.322 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.892 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:59.892 [226/265] Linking static target lib/librte_vhost.a 00:02:00.887 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:02.269 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.845 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.227 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.487 [231/265] Linking target lib/librte_eal.so.24.0 00:02:10.487 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:10.487 [233/265] Linking target lib/librte_meter.so.24.0 00:02:10.747 [234/265] Linking target lib/librte_timer.so.24.0 00:02:10.747 [235/265] Linking target lib/librte_ring.so.24.0 00:02:10.747 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:10.747 [237/265] Linking target lib/librte_pci.so.24.0 00:02:10.747 [238/265] Linking target lib/librte_dmadev.so.24.0 00:02:10.747 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:10.747 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:10.747 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:10.747 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:10.747 [243/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:10.747 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:10.747 [245/265] Linking target lib/librte_rcu.so.24.0 00:02:10.747 [246/265] Linking target lib/librte_mempool.so.24.0 00:02:11.008 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:11.008 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:11.008 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:11.008 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:11.266 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:11.266 [252/265] Linking target lib/librte_reorder.so.24.0 00:02:11.266 [253/265] Linking target lib/librte_compressdev.so.24.0 00:02:11.266 [254/265] Linking target lib/librte_net.so.24.0 00:02:11.266 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:11.266 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:11.266 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:11.266 [258/265] Linking target lib/librte_hash.so.24.0 00:02:11.266 [259/265] Linking target lib/librte_cmdline.so.24.0 00:02:11.526 [260/265] Linking target lib/librte_ethdev.so.24.0 00:02:11.526 [261/265] Linking target lib/librte_security.so.24.0 00:02:11.526 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:11.526 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:11.526 [264/265] Linking target lib/librte_power.so.24.0 00:02:11.526 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:11.526 INFO: autodetecting backend as ninja 00:02:11.526 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:12.464 CC lib/ut/ut.o 00:02:12.464 CC lib/log/log_deprecated.o 00:02:12.464 CC lib/log/log.o 00:02:12.464 CC lib/log/log_flags.o 00:02:12.464 CC lib/ut_mock/mock.o 00:02:12.724 LIB libspdk_ut.a 00:02:12.724 LIB libspdk_ut_mock.a 00:02:12.724 LIB libspdk_log.a 
00:02:12.724 SO libspdk_ut.so.1.0 00:02:12.724 SO libspdk_ut_mock.so.5.0 00:02:12.724 SO libspdk_log.so.6.1 00:02:12.724 SYMLINK libspdk_ut.so 00:02:12.724 SYMLINK libspdk_ut_mock.so 00:02:12.724 SYMLINK libspdk_log.so 00:02:12.983 CXX lib/trace_parser/trace.o 00:02:12.983 CC lib/dma/dma.o 00:02:12.983 CC lib/ioat/ioat.o 00:02:12.983 CC lib/util/cpuset.o 00:02:12.983 CC lib/util/base64.o 00:02:12.983 CC lib/util/bit_array.o 00:02:12.983 CC lib/util/crc16.o 00:02:12.983 CC lib/util/crc32.o 00:02:12.983 CC lib/util/crc32c.o 00:02:12.983 CC lib/util/crc32_ieee.o 00:02:12.983 CC lib/util/crc64.o 00:02:12.983 CC lib/util/dif.o 00:02:12.983 CC lib/util/fd.o 00:02:12.983 CC lib/util/file.o 00:02:12.983 CC lib/util/hexlify.o 00:02:12.983 CC lib/util/iov.o 00:02:12.983 CC lib/util/math.o 00:02:12.983 CC lib/util/pipe.o 00:02:12.983 CC lib/util/strerror_tls.o 00:02:12.983 CC lib/util/string.o 00:02:12.983 CC lib/util/uuid.o 00:02:12.983 CC lib/util/fd_group.o 00:02:12.983 CC lib/util/xor.o 00:02:12.983 CC lib/util/zipf.o 00:02:13.243 CC lib/vfio_user/host/vfio_user_pci.o 00:02:13.243 CC lib/vfio_user/host/vfio_user.o 00:02:13.243 LIB libspdk_dma.a 00:02:13.243 SO libspdk_dma.so.3.0 00:02:13.243 LIB libspdk_ioat.a 00:02:13.243 SO libspdk_ioat.so.6.0 00:02:13.243 SYMLINK libspdk_dma.so 00:02:13.243 LIB libspdk_vfio_user.a 00:02:13.502 SYMLINK libspdk_ioat.so 00:02:13.502 SO libspdk_vfio_user.so.4.0 00:02:13.502 SYMLINK libspdk_vfio_user.so 00:02:13.502 LIB libspdk_util.a 00:02:13.502 SO libspdk_util.so.8.0 00:02:13.762 SYMLINK libspdk_util.so 00:02:13.762 LIB libspdk_trace_parser.a 00:02:13.762 SO libspdk_trace_parser.so.4.0 00:02:13.762 SYMLINK libspdk_trace_parser.so 00:02:13.762 CC lib/vmd/vmd.o 00:02:13.762 CC lib/vmd/led.o 00:02:13.762 CC lib/env_dpdk/env.o 00:02:13.762 CC lib/env_dpdk/memory.o 00:02:13.762 CC lib/env_dpdk/pci.o 00:02:13.762 CC lib/env_dpdk/threads.o 00:02:13.762 CC lib/env_dpdk/init.o 00:02:13.762 CC lib/env_dpdk/pci_vmd.o 00:02:13.762 CC lib/env_dpdk/pci_ioat.o 00:02:13.762 CC lib/env_dpdk/pci_virtio.o 00:02:13.762 CC lib/env_dpdk/pci_idxd.o 00:02:13.762 CC lib/env_dpdk/pci_event.o 00:02:13.762 CC lib/env_dpdk/sigbus_handler.o 00:02:13.762 CC lib/env_dpdk/pci_dpdk.o 00:02:13.762 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:13.762 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:14.021 CC lib/idxd/idxd.o 00:02:14.021 CC lib/idxd/idxd_user.o 00:02:14.021 CC lib/idxd/idxd_kernel.o 00:02:14.021 CC lib/rdma/common.o 00:02:14.021 CC lib/rdma/rdma_verbs.o 00:02:14.021 CC lib/json/json_parse.o 00:02:14.021 CC lib/conf/conf.o 00:02:14.021 CC lib/json/json_util.o 00:02:14.021 CC lib/json/json_write.o 00:02:14.021 LIB libspdk_conf.a 00:02:14.021 LIB libspdk_json.a 00:02:14.280 SO libspdk_conf.so.5.0 00:02:14.280 LIB libspdk_rdma.a 00:02:14.280 SO libspdk_json.so.5.1 00:02:14.280 SO libspdk_rdma.so.5.0 00:02:14.280 SYMLINK libspdk_conf.so 00:02:14.281 SYMLINK libspdk_json.so 00:02:14.281 SYMLINK libspdk_rdma.so 00:02:14.281 LIB libspdk_idxd.a 00:02:14.281 LIB libspdk_vmd.a 00:02:14.281 SO libspdk_vmd.so.5.0 00:02:14.281 SO libspdk_idxd.so.11.0 00:02:14.540 SYMLINK libspdk_vmd.so 00:02:14.540 SYMLINK libspdk_idxd.so 00:02:14.540 CC lib/jsonrpc/jsonrpc_server.o 00:02:14.540 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:14.540 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:14.540 CC lib/jsonrpc/jsonrpc_client.o 00:02:14.799 LIB libspdk_jsonrpc.a 00:02:14.799 SO libspdk_jsonrpc.so.5.1 00:02:14.799 SYMLINK libspdk_jsonrpc.so 00:02:14.799 LIB libspdk_env_dpdk.a 00:02:14.799 SO libspdk_env_dpdk.so.13.0 00:02:15.059 SYMLINK 
libspdk_env_dpdk.so 00:02:15.059 CC lib/rpc/rpc.o 00:02:15.319 LIB libspdk_rpc.a 00:02:15.319 SO libspdk_rpc.so.5.0 00:02:15.319 SYMLINK libspdk_rpc.so 00:02:15.581 CC lib/sock/sock.o 00:02:15.581 CC lib/sock/sock_rpc.o 00:02:15.581 CC lib/trace/trace.o 00:02:15.581 CC lib/trace/trace_flags.o 00:02:15.581 CC lib/trace/trace_rpc.o 00:02:15.581 CC lib/notify/notify.o 00:02:15.581 CC lib/notify/notify_rpc.o 00:02:15.942 LIB libspdk_notify.a 00:02:15.942 LIB libspdk_trace.a 00:02:15.942 SO libspdk_notify.so.5.0 00:02:15.942 SO libspdk_trace.so.9.0 00:02:15.942 SYMLINK libspdk_notify.so 00:02:15.942 LIB libspdk_sock.a 00:02:15.942 SYMLINK libspdk_trace.so 00:02:15.942 SO libspdk_sock.so.8.0 00:02:15.942 SYMLINK libspdk_sock.so 00:02:16.277 CC lib/thread/iobuf.o 00:02:16.277 CC lib/thread/thread.o 00:02:16.277 CC lib/nvme/nvme_fabric.o 00:02:16.277 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:16.277 CC lib/nvme/nvme_ctrlr.o 00:02:16.277 CC lib/nvme/nvme_ns.o 00:02:16.277 CC lib/nvme/nvme_ns_cmd.o 00:02:16.277 CC lib/nvme/nvme_pcie_common.o 00:02:16.277 CC lib/nvme/nvme_pcie.o 00:02:16.277 CC lib/nvme/nvme_qpair.o 00:02:16.277 CC lib/nvme/nvme_discovery.o 00:02:16.277 CC lib/nvme/nvme.o 00:02:16.277 CC lib/nvme/nvme_quirks.o 00:02:16.277 CC lib/nvme/nvme_transport.o 00:02:16.277 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:16.277 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:16.277 CC lib/nvme/nvme_tcp.o 00:02:16.277 CC lib/nvme/nvme_opal.o 00:02:16.277 CC lib/nvme/nvme_io_msg.o 00:02:16.277 CC lib/nvme/nvme_poll_group.o 00:02:16.277 CC lib/nvme/nvme_zns.o 00:02:16.277 CC lib/nvme/nvme_cuse.o 00:02:16.277 CC lib/nvme/nvme_vfio_user.o 00:02:16.277 CC lib/nvme/nvme_rdma.o 00:02:17.216 LIB libspdk_thread.a 00:02:17.216 SO libspdk_thread.so.9.0 00:02:17.216 SYMLINK libspdk_thread.so 00:02:17.475 CC lib/blob/request.o 00:02:17.475 CC lib/blob/blobstore.o 00:02:17.475 CC lib/blob/zeroes.o 00:02:17.475 CC lib/blob/blob_bs_dev.o 00:02:17.475 CC lib/virtio/virtio.o 00:02:17.475 CC lib/virtio/virtio_vhost_user.o 00:02:17.475 CC lib/virtio/virtio_vfio_user.o 00:02:17.475 CC lib/virtio/virtio_pci.o 00:02:17.475 CC lib/init/json_config.o 00:02:17.475 CC lib/init/subsystem.o 00:02:17.475 CC lib/init/subsystem_rpc.o 00:02:17.475 CC lib/init/rpc.o 00:02:17.475 CC lib/accel/accel_rpc.o 00:02:17.475 CC lib/accel/accel.o 00:02:17.475 CC lib/accel/accel_sw.o 00:02:17.735 LIB libspdk_nvme.a 00:02:17.735 LIB libspdk_init.a 00:02:17.735 SO libspdk_nvme.so.12.0 00:02:17.735 LIB libspdk_virtio.a 00:02:17.735 SO libspdk_init.so.4.0 00:02:17.735 SO libspdk_virtio.so.6.0 00:02:17.735 SYMLINK libspdk_init.so 00:02:17.735 SYMLINK libspdk_virtio.so 00:02:17.994 SYMLINK libspdk_nvme.so 00:02:17.994 CC lib/event/log_rpc.o 00:02:17.994 CC lib/event/app.o 00:02:17.994 CC lib/event/reactor.o 00:02:17.994 CC lib/event/scheduler_static.o 00:02:17.994 CC lib/event/app_rpc.o 00:02:18.253 LIB libspdk_accel.a 00:02:18.253 SO libspdk_accel.so.14.0 00:02:18.253 SYMLINK libspdk_accel.so 00:02:18.512 LIB libspdk_event.a 00:02:18.512 SO libspdk_event.so.12.0 00:02:18.512 SYMLINK libspdk_event.so 00:02:18.512 CC lib/bdev/bdev.o 00:02:18.512 CC lib/bdev/bdev_rpc.o 00:02:18.512 CC lib/bdev/part.o 00:02:18.512 CC lib/bdev/bdev_zone.o 00:02:18.512 CC lib/bdev/scsi_nvme.o 00:02:19.452 LIB libspdk_blob.a 00:02:19.452 SO libspdk_blob.so.10.1 00:02:19.452 SYMLINK libspdk_blob.so 00:02:19.711 CC lib/lvol/lvol.o 00:02:19.711 CC lib/blobfs/blobfs.o 00:02:19.711 CC lib/blobfs/tree.o 00:02:20.280 LIB libspdk_bdev.a 00:02:20.280 LIB libspdk_blobfs.a 00:02:20.280 SO 
libspdk_bdev.so.14.0 00:02:20.280 LIB libspdk_lvol.a 00:02:20.280 SO libspdk_blobfs.so.9.0 00:02:20.538 SO libspdk_lvol.so.9.1 00:02:20.538 SYMLINK libspdk_blobfs.so 00:02:20.538 SYMLINK libspdk_bdev.so 00:02:20.538 SYMLINK libspdk_lvol.so 00:02:20.538 CC lib/ublk/ublk.o 00:02:20.538 CC lib/ublk/ublk_rpc.o 00:02:20.797 CC lib/nvmf/ctrlr.o 00:02:20.797 CC lib/nvmf/ctrlr_discovery.o 00:02:20.797 CC lib/nvmf/ctrlr_bdev.o 00:02:20.797 CC lib/nvmf/subsystem.o 00:02:20.797 CC lib/nvmf/nvmf.o 00:02:20.797 CC lib/nvmf/nvmf_rpc.o 00:02:20.797 CC lib/nvmf/transport.o 00:02:20.797 CC lib/nvmf/rdma.o 00:02:20.797 CC lib/nvmf/tcp.o 00:02:20.797 CC lib/ftl/ftl_core.o 00:02:20.797 CC lib/ftl/ftl_init.o 00:02:20.797 CC lib/ftl/ftl_layout.o 00:02:20.797 CC lib/ftl/ftl_debug.o 00:02:20.797 CC lib/ftl/ftl_io.o 00:02:20.797 CC lib/ftl/ftl_sb.o 00:02:20.797 CC lib/ftl/ftl_l2p.o 00:02:20.797 CC lib/ftl/ftl_l2p_flat.o 00:02:20.797 CC lib/nbd/nbd.o 00:02:20.797 CC lib/ftl/ftl_nv_cache.o 00:02:20.797 CC lib/nbd/nbd_rpc.o 00:02:20.797 CC lib/scsi/dev.o 00:02:20.797 CC lib/scsi/lun.o 00:02:20.797 CC lib/ftl/ftl_band.o 00:02:20.797 CC lib/scsi/port.o 00:02:20.797 CC lib/ftl/ftl_band_ops.o 00:02:20.797 CC lib/ftl/ftl_writer.o 00:02:20.797 CC lib/scsi/scsi.o 00:02:20.797 CC lib/ftl/ftl_rq.o 00:02:20.797 CC lib/scsi/scsi_bdev.o 00:02:20.797 CC lib/ftl/ftl_reloc.o 00:02:20.797 CC lib/scsi/scsi_pr.o 00:02:20.797 CC lib/ftl/ftl_l2p_cache.o 00:02:20.797 CC lib/scsi/scsi_rpc.o 00:02:20.797 CC lib/ftl/ftl_p2l.o 00:02:20.797 CC lib/scsi/task.o 00:02:20.797 CC lib/ftl/mngt/ftl_mngt.o 00:02:20.797 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:20.797 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:20.797 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:20.797 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:20.797 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:20.797 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:20.797 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:20.797 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:20.797 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:20.797 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:20.797 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:20.797 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:20.797 CC lib/ftl/utils/ftl_md.o 00:02:20.797 CC lib/ftl/utils/ftl_conf.o 00:02:20.797 CC lib/ftl/utils/ftl_mempool.o 00:02:20.797 CC lib/ftl/utils/ftl_bitmap.o 00:02:20.797 CC lib/ftl/utils/ftl_property.o 00:02:20.797 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:20.797 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:20.797 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:20.797 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:20.797 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:20.797 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:20.797 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:20.797 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:20.797 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:20.797 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:20.797 CC lib/ftl/base/ftl_base_dev.o 00:02:20.797 CC lib/ftl/base/ftl_base_bdev.o 00:02:20.797 CC lib/ftl/ftl_trace.o 00:02:21.056 LIB libspdk_nbd.a 00:02:21.056 SO libspdk_nbd.so.6.0 00:02:21.316 SYMLINK libspdk_nbd.so 00:02:21.316 LIB libspdk_scsi.a 00:02:21.316 LIB libspdk_ublk.a 00:02:21.316 SO libspdk_ublk.so.2.0 00:02:21.316 SO libspdk_scsi.so.8.0 00:02:21.316 SYMLINK libspdk_ublk.so 00:02:21.316 SYMLINK libspdk_scsi.so 00:02:21.575 LIB libspdk_ftl.a 00:02:21.575 SO libspdk_ftl.so.8.0 00:02:21.575 CC lib/iscsi/conn.o 00:02:21.575 CC lib/iscsi/init_grp.o 00:02:21.575 CC lib/iscsi/iscsi.o 00:02:21.575 CC lib/iscsi/param.o 00:02:21.575 CC lib/vhost/vhost.o 00:02:21.575 CC lib/iscsi/md5.o 00:02:21.575 CC 
lib/iscsi/portal_grp.o 00:02:21.575 CC lib/vhost/vhost_rpc.o 00:02:21.575 CC lib/vhost/vhost_scsi.o 00:02:21.575 CC lib/vhost/vhost_blk.o 00:02:21.575 CC lib/iscsi/tgt_node.o 00:02:21.575 CC lib/vhost/rte_vhost_user.o 00:02:21.575 CC lib/iscsi/iscsi_subsystem.o 00:02:21.575 CC lib/iscsi/iscsi_rpc.o 00:02:21.575 CC lib/iscsi/task.o 00:02:21.834 SYMLINK libspdk_ftl.so 00:02:22.093 LIB libspdk_nvmf.a 00:02:22.352 SO libspdk_nvmf.so.17.0 00:02:22.352 SYMLINK libspdk_nvmf.so 00:02:22.352 LIB libspdk_vhost.a 00:02:22.611 SO libspdk_vhost.so.7.1 00:02:22.611 SYMLINK libspdk_vhost.so 00:02:22.611 LIB libspdk_iscsi.a 00:02:22.611 SO libspdk_iscsi.so.7.0 00:02:22.871 SYMLINK libspdk_iscsi.so 00:02:23.129 CC module/env_dpdk/env_dpdk_rpc.o 00:02:23.389 CC module/blob/bdev/blob_bdev.o 00:02:23.389 CC module/accel/ioat/accel_ioat.o 00:02:23.389 CC module/accel/error/accel_error.o 00:02:23.389 CC module/accel/ioat/accel_ioat_rpc.o 00:02:23.389 CC module/sock/posix/posix.o 00:02:23.389 CC module/scheduler/gscheduler/gscheduler.o 00:02:23.389 CC module/accel/error/accel_error_rpc.o 00:02:23.389 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:23.389 CC module/accel/iaa/accel_iaa_rpc.o 00:02:23.389 CC module/accel/iaa/accel_iaa.o 00:02:23.389 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:23.389 CC module/accel/dsa/accel_dsa.o 00:02:23.389 CC module/accel/dsa/accel_dsa_rpc.o 00:02:23.389 LIB libspdk_env_dpdk_rpc.a 00:02:23.389 SO libspdk_env_dpdk_rpc.so.5.0 00:02:23.389 SYMLINK libspdk_env_dpdk_rpc.so 00:02:23.389 LIB libspdk_scheduler_gscheduler.a 00:02:23.389 LIB libspdk_scheduler_dpdk_governor.a 00:02:23.389 LIB libspdk_accel_error.a 00:02:23.389 SO libspdk_scheduler_gscheduler.so.3.0 00:02:23.389 LIB libspdk_scheduler_dynamic.a 00:02:23.389 LIB libspdk_accel_ioat.a 00:02:23.389 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:23.389 SO libspdk_accel_error.so.1.0 00:02:23.389 LIB libspdk_blob_bdev.a 00:02:23.389 LIB libspdk_accel_iaa.a 00:02:23.389 SO libspdk_scheduler_dynamic.so.3.0 00:02:23.389 SO libspdk_accel_ioat.so.5.0 00:02:23.648 SYMLINK libspdk_scheduler_gscheduler.so 00:02:23.648 SO libspdk_blob_bdev.so.10.1 00:02:23.648 LIB libspdk_accel_dsa.a 00:02:23.648 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:23.648 SO libspdk_accel_iaa.so.2.0 00:02:23.648 SYMLINK libspdk_accel_error.so 00:02:23.648 SYMLINK libspdk_scheduler_dynamic.so 00:02:23.648 SYMLINK libspdk_accel_ioat.so 00:02:23.648 SO libspdk_accel_dsa.so.4.0 00:02:23.648 SYMLINK libspdk_blob_bdev.so 00:02:23.648 SYMLINK libspdk_accel_iaa.so 00:02:23.648 SYMLINK libspdk_accel_dsa.so 00:02:23.908 LIB libspdk_sock_posix.a 00:02:23.908 SO libspdk_sock_posix.so.5.0 00:02:23.908 CC module/bdev/error/vbdev_error.o 00:02:23.908 CC module/bdev/error/vbdev_error_rpc.o 00:02:23.908 CC module/bdev/delay/vbdev_delay.o 00:02:23.908 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:23.908 CC module/bdev/null/bdev_null.o 00:02:23.908 CC module/bdev/nvme/bdev_nvme.o 00:02:23.908 CC module/bdev/nvme/nvme_rpc.o 00:02:23.908 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:23.908 CC module/bdev/raid/bdev_raid.o 00:02:23.908 CC module/bdev/null/bdev_null_rpc.o 00:02:23.908 CC module/bdev/nvme/bdev_mdns_client.o 00:02:23.908 CC module/bdev/gpt/gpt.o 00:02:23.908 CC module/bdev/raid/bdev_raid_rpc.o 00:02:23.908 CC module/bdev/nvme/vbdev_opal.o 00:02:23.908 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:23.908 CC module/bdev/ftl/bdev_ftl.o 00:02:23.908 CC module/bdev/raid/bdev_raid_sb.o 00:02:23.908 CC module/bdev/gpt/vbdev_gpt.o 00:02:23.908 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:02:23.908 CC module/bdev/raid/raid0.o 00:02:23.908 CC module/bdev/split/vbdev_split.o 00:02:23.908 CC module/bdev/raid/raid1.o 00:02:23.908 CC module/bdev/passthru/vbdev_passthru.o 00:02:23.908 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:23.908 CC module/bdev/split/vbdev_split_rpc.o 00:02:23.908 CC module/bdev/lvol/vbdev_lvol.o 00:02:23.908 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:23.908 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:23.908 CC module/bdev/raid/concat.o 00:02:23.908 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:23.908 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:23.908 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:23.908 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:23.909 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:23.909 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:23.909 CC module/blobfs/bdev/blobfs_bdev.o 00:02:23.909 CC module/bdev/iscsi/bdev_iscsi.o 00:02:23.909 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:23.909 CC module/bdev/aio/bdev_aio.o 00:02:23.909 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:23.909 CC module/bdev/malloc/bdev_malloc.o 00:02:23.909 CC module/bdev/aio/bdev_aio_rpc.o 00:02:23.909 SYMLINK libspdk_sock_posix.so 00:02:24.168 LIB libspdk_blobfs_bdev.a 00:02:24.168 LIB libspdk_bdev_error.a 00:02:24.168 SO libspdk_blobfs_bdev.so.5.0 00:02:24.168 LIB libspdk_bdev_split.a 00:02:24.168 LIB libspdk_bdev_null.a 00:02:24.168 SO libspdk_bdev_split.so.5.0 00:02:24.168 SO libspdk_bdev_error.so.5.0 00:02:24.168 LIB libspdk_bdev_passthru.a 00:02:24.168 LIB libspdk_bdev_gpt.a 00:02:24.168 SYMLINK libspdk_blobfs_bdev.so 00:02:24.168 LIB libspdk_bdev_ftl.a 00:02:24.168 SO libspdk_bdev_null.so.5.0 00:02:24.168 SO libspdk_bdev_gpt.so.5.0 00:02:24.168 LIB libspdk_bdev_zone_block.a 00:02:24.168 SO libspdk_bdev_passthru.so.5.0 00:02:24.168 LIB libspdk_bdev_delay.a 00:02:24.168 SYMLINK libspdk_bdev_split.so 00:02:24.168 SO libspdk_bdev_ftl.so.5.0 00:02:24.168 SYMLINK libspdk_bdev_error.so 00:02:24.168 SO libspdk_bdev_zone_block.so.5.0 00:02:24.168 LIB libspdk_bdev_aio.a 00:02:24.168 LIB libspdk_bdev_iscsi.a 00:02:24.427 SYMLINK libspdk_bdev_gpt.so 00:02:24.427 LIB libspdk_bdev_malloc.a 00:02:24.427 SO libspdk_bdev_delay.so.5.0 00:02:24.427 SYMLINK libspdk_bdev_null.so 00:02:24.427 SYMLINK libspdk_bdev_passthru.so 00:02:24.427 SO libspdk_bdev_iscsi.so.5.0 00:02:24.428 SO libspdk_bdev_aio.so.5.0 00:02:24.428 SO libspdk_bdev_malloc.so.5.0 00:02:24.428 SYMLINK libspdk_bdev_zone_block.so 00:02:24.428 SYMLINK libspdk_bdev_ftl.so 00:02:24.428 SYMLINK libspdk_bdev_delay.so 00:02:24.428 SYMLINK libspdk_bdev_iscsi.so 00:02:24.428 SYMLINK libspdk_bdev_malloc.so 00:02:24.428 LIB libspdk_bdev_lvol.a 00:02:24.428 SYMLINK libspdk_bdev_aio.so 00:02:24.428 LIB libspdk_bdev_virtio.a 00:02:24.428 SO libspdk_bdev_lvol.so.5.0 00:02:24.428 SO libspdk_bdev_virtio.so.5.0 00:02:24.428 SYMLINK libspdk_bdev_lvol.so 00:02:24.428 SYMLINK libspdk_bdev_virtio.so 00:02:24.686 LIB libspdk_bdev_raid.a 00:02:24.686 SO libspdk_bdev_raid.so.5.0 00:02:24.686 SYMLINK libspdk_bdev_raid.so 00:02:25.625 LIB libspdk_bdev_nvme.a 00:02:25.625 SO libspdk_bdev_nvme.so.6.0 00:02:25.625 SYMLINK libspdk_bdev_nvme.so 00:02:26.193 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:26.193 CC module/event/subsystems/sock/sock.o 00:02:26.193 CC module/event/subsystems/iobuf/iobuf.o 00:02:26.193 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:26.193 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:26.193 CC module/event/subsystems/vmd/vmd.o 00:02:26.193 CC 
module/event/subsystems/scheduler/scheduler.o 00:02:26.193 LIB libspdk_event_sock.a 00:02:26.193 LIB libspdk_event_vhost_blk.a 00:02:26.193 LIB libspdk_event_scheduler.a 00:02:26.193 SO libspdk_event_sock.so.4.0 00:02:26.193 LIB libspdk_event_vmd.a 00:02:26.193 LIB libspdk_event_iobuf.a 00:02:26.193 SO libspdk_event_vhost_blk.so.2.0 00:02:26.193 SO libspdk_event_scheduler.so.3.0 00:02:26.193 SO libspdk_event_iobuf.so.2.0 00:02:26.193 SO libspdk_event_vmd.so.5.0 00:02:26.193 SYMLINK libspdk_event_sock.so 00:02:26.452 SYMLINK libspdk_event_scheduler.so 00:02:26.452 SYMLINK libspdk_event_vhost_blk.so 00:02:26.452 SYMLINK libspdk_event_iobuf.so 00:02:26.452 SYMLINK libspdk_event_vmd.so 00:02:26.714 CC module/event/subsystems/accel/accel.o 00:02:26.714 LIB libspdk_event_accel.a 00:02:26.714 SO libspdk_event_accel.so.5.0 00:02:26.974 SYMLINK libspdk_event_accel.so 00:02:26.974 CC module/event/subsystems/bdev/bdev.o 00:02:27.233 LIB libspdk_event_bdev.a 00:02:27.233 SO libspdk_event_bdev.so.5.0 00:02:27.233 SYMLINK libspdk_event_bdev.so 00:02:27.493 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:27.493 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:27.493 CC module/event/subsystems/ublk/ublk.o 00:02:27.493 CC module/event/subsystems/nbd/nbd.o 00:02:27.493 CC module/event/subsystems/scsi/scsi.o 00:02:27.753 LIB libspdk_event_ublk.a 00:02:27.753 LIB libspdk_event_scsi.a 00:02:27.753 LIB libspdk_event_nbd.a 00:02:27.753 SO libspdk_event_ublk.so.2.0 00:02:27.753 LIB libspdk_event_nvmf.a 00:02:27.753 SO libspdk_event_scsi.so.5.0 00:02:27.753 SO libspdk_event_nbd.so.5.0 00:02:27.753 SO libspdk_event_nvmf.so.5.0 00:02:27.753 SYMLINK libspdk_event_ublk.so 00:02:27.753 SYMLINK libspdk_event_scsi.so 00:02:27.753 SYMLINK libspdk_event_nbd.so 00:02:27.753 SYMLINK libspdk_event_nvmf.so 00:02:28.012 CC module/event/subsystems/iscsi/iscsi.o 00:02:28.012 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:28.271 LIB libspdk_event_vhost_scsi.a 00:02:28.271 LIB libspdk_event_iscsi.a 00:02:28.271 SO libspdk_event_vhost_scsi.so.2.0 00:02:28.271 SO libspdk_event_iscsi.so.5.0 00:02:28.271 SYMLINK libspdk_event_vhost_scsi.so 00:02:28.271 SYMLINK libspdk_event_iscsi.so 00:02:28.530 SO libspdk.so.5.0 00:02:28.530 SYMLINK libspdk.so 00:02:28.801 TEST_HEADER include/spdk/accel.h 00:02:28.801 TEST_HEADER include/spdk/accel_module.h 00:02:28.801 TEST_HEADER include/spdk/assert.h 00:02:28.801 TEST_HEADER include/spdk/base64.h 00:02:28.801 TEST_HEADER include/spdk/bdev_module.h 00:02:28.801 TEST_HEADER include/spdk/bdev.h 00:02:28.801 TEST_HEADER include/spdk/barrier.h 00:02:28.801 TEST_HEADER include/spdk/bdev_zone.h 00:02:28.801 CC test/rpc_client/rpc_client_test.o 00:02:28.801 TEST_HEADER include/spdk/bit_pool.h 00:02:28.801 TEST_HEADER include/spdk/bit_array.h 00:02:28.801 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:28.801 TEST_HEADER include/spdk/blob_bdev.h 00:02:28.801 TEST_HEADER include/spdk/blobfs.h 00:02:28.801 TEST_HEADER include/spdk/conf.h 00:02:28.801 TEST_HEADER include/spdk/blob.h 00:02:28.801 TEST_HEADER include/spdk/crc16.h 00:02:28.801 TEST_HEADER include/spdk/cpuset.h 00:02:28.801 TEST_HEADER include/spdk/crc32.h 00:02:28.801 TEST_HEADER include/spdk/config.h 00:02:28.801 TEST_HEADER include/spdk/crc64.h 00:02:28.801 TEST_HEADER include/spdk/dif.h 00:02:28.801 TEST_HEADER include/spdk/endian.h 00:02:28.801 TEST_HEADER include/spdk/env_dpdk.h 00:02:28.801 TEST_HEADER include/spdk/env.h 00:02:28.801 TEST_HEADER include/spdk/dma.h 00:02:28.801 TEST_HEADER include/spdk/event.h 00:02:28.801 
TEST_HEADER include/spdk/fd_group.h 00:02:28.801 TEST_HEADER include/spdk/fd.h 00:02:28.801 TEST_HEADER include/spdk/file.h 00:02:28.801 TEST_HEADER include/spdk/gpt_spec.h 00:02:28.801 TEST_HEADER include/spdk/ftl.h 00:02:28.801 TEST_HEADER include/spdk/hexlify.h 00:02:28.801 TEST_HEADER include/spdk/histogram_data.h 00:02:28.801 TEST_HEADER include/spdk/idxd.h 00:02:28.802 TEST_HEADER include/spdk/idxd_spec.h 00:02:28.802 TEST_HEADER include/spdk/init.h 00:02:28.802 TEST_HEADER include/spdk/ioat.h 00:02:28.802 CC app/trace_record/trace_record.o 00:02:28.802 CC app/spdk_nvme_perf/perf.o 00:02:28.802 TEST_HEADER include/spdk/ioat_spec.h 00:02:28.802 TEST_HEADER include/spdk/json.h 00:02:28.802 TEST_HEADER include/spdk/iscsi_spec.h 00:02:28.802 TEST_HEADER include/spdk/jsonrpc.h 00:02:28.802 TEST_HEADER include/spdk/likely.h 00:02:28.802 CXX app/trace/trace.o 00:02:28.802 TEST_HEADER include/spdk/log.h 00:02:28.802 TEST_HEADER include/spdk/lvol.h 00:02:28.802 TEST_HEADER include/spdk/mmio.h 00:02:28.802 TEST_HEADER include/spdk/memory.h 00:02:28.802 TEST_HEADER include/spdk/nbd.h 00:02:28.802 CC app/spdk_nvme_identify/identify.o 00:02:28.802 CC app/spdk_lspci/spdk_lspci.o 00:02:28.802 TEST_HEADER include/spdk/nvme_intel.h 00:02:28.802 TEST_HEADER include/spdk/nvme.h 00:02:28.802 TEST_HEADER include/spdk/notify.h 00:02:28.802 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:28.802 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:28.802 TEST_HEADER include/spdk/nvme_spec.h 00:02:28.802 CC app/spdk_top/spdk_top.o 00:02:28.802 TEST_HEADER include/spdk/nvme_zns.h 00:02:28.802 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:28.802 TEST_HEADER include/spdk/nvmf.h 00:02:28.802 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:28.802 TEST_HEADER include/spdk/nvmf_transport.h 00:02:28.802 TEST_HEADER include/spdk/nvmf_spec.h 00:02:28.802 TEST_HEADER include/spdk/opal_spec.h 00:02:28.802 TEST_HEADER include/spdk/opal.h 00:02:28.802 TEST_HEADER include/spdk/pci_ids.h 00:02:28.802 TEST_HEADER include/spdk/pipe.h 00:02:28.802 CC app/spdk_nvme_discover/discovery_aer.o 00:02:28.802 TEST_HEADER include/spdk/queue.h 00:02:28.802 TEST_HEADER include/spdk/reduce.h 00:02:28.802 TEST_HEADER include/spdk/rpc.h 00:02:28.802 TEST_HEADER include/spdk/scheduler.h 00:02:28.802 TEST_HEADER include/spdk/scsi.h 00:02:28.802 TEST_HEADER include/spdk/scsi_spec.h 00:02:28.802 TEST_HEADER include/spdk/sock.h 00:02:28.802 TEST_HEADER include/spdk/stdinc.h 00:02:28.802 TEST_HEADER include/spdk/string.h 00:02:28.802 TEST_HEADER include/spdk/thread.h 00:02:28.802 TEST_HEADER include/spdk/trace.h 00:02:28.802 TEST_HEADER include/spdk/tree.h 00:02:28.802 TEST_HEADER include/spdk/trace_parser.h 00:02:28.802 TEST_HEADER include/spdk/ublk.h 00:02:28.802 TEST_HEADER include/spdk/uuid.h 00:02:28.802 TEST_HEADER include/spdk/util.h 00:02:28.802 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:28.802 TEST_HEADER include/spdk/version.h 00:02:28.802 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:28.802 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:28.802 TEST_HEADER include/spdk/vhost.h 00:02:28.802 TEST_HEADER include/spdk/vmd.h 00:02:28.802 TEST_HEADER include/spdk/xor.h 00:02:28.802 CXX test/cpp_headers/accel.o 00:02:28.802 TEST_HEADER include/spdk/zipf.h 00:02:28.802 CXX test/cpp_headers/accel_module.o 00:02:28.802 CXX test/cpp_headers/assert.o 00:02:28.802 CXX test/cpp_headers/barrier.o 00:02:28.802 CXX test/cpp_headers/base64.o 00:02:28.802 CXX test/cpp_headers/bdev.o 00:02:28.802 CXX test/cpp_headers/bdev_module.o 00:02:28.802 CXX 
test/cpp_headers/bit_array.o 00:02:28.802 CXX test/cpp_headers/bdev_zone.o 00:02:28.802 CXX test/cpp_headers/bit_pool.o 00:02:28.802 CXX test/cpp_headers/blob_bdev.o 00:02:28.802 CXX test/cpp_headers/blobfs.o 00:02:28.802 CXX test/cpp_headers/blobfs_bdev.o 00:02:28.802 CXX test/cpp_headers/blob.o 00:02:28.802 CXX test/cpp_headers/conf.o 00:02:28.802 CXX test/cpp_headers/config.o 00:02:28.802 CXX test/cpp_headers/cpuset.o 00:02:28.802 CC app/spdk_dd/spdk_dd.o 00:02:28.802 CXX test/cpp_headers/crc16.o 00:02:28.802 CXX test/cpp_headers/crc32.o 00:02:28.802 CXX test/cpp_headers/crc64.o 00:02:28.802 CXX test/cpp_headers/dif.o 00:02:28.802 CXX test/cpp_headers/dma.o 00:02:28.802 CXX test/cpp_headers/endian.o 00:02:28.802 CXX test/cpp_headers/env.o 00:02:28.802 CXX test/cpp_headers/env_dpdk.o 00:02:28.802 CXX test/cpp_headers/event.o 00:02:28.802 CXX test/cpp_headers/fd_group.o 00:02:28.802 CXX test/cpp_headers/fd.o 00:02:28.802 CXX test/cpp_headers/file.o 00:02:28.802 CXX test/cpp_headers/ftl.o 00:02:28.802 CXX test/cpp_headers/gpt_spec.o 00:02:28.802 CXX test/cpp_headers/hexlify.o 00:02:28.802 CXX test/cpp_headers/histogram_data.o 00:02:28.802 CC app/iscsi_tgt/iscsi_tgt.o 00:02:28.802 CXX test/cpp_headers/idxd.o 00:02:28.802 CXX test/cpp_headers/idxd_spec.o 00:02:28.802 CXX test/cpp_headers/init.o 00:02:28.802 CC app/vhost/vhost.o 00:02:28.802 CXX test/cpp_headers/ioat.o 00:02:28.802 CC app/nvmf_tgt/nvmf_main.o 00:02:28.802 CC app/spdk_tgt/spdk_tgt.o 00:02:28.802 CC test/app/jsoncat/jsoncat.o 00:02:28.802 CC test/app/histogram_perf/histogram_perf.o 00:02:28.802 CC test/nvme/aer/aer.o 00:02:28.802 CC test/app/stub/stub.o 00:02:28.802 CC test/nvme/overhead/overhead.o 00:02:28.802 CC test/nvme/reset/reset.o 00:02:28.802 CC test/nvme/sgl/sgl.o 00:02:28.802 CC test/nvme/err_injection/err_injection.o 00:02:28.802 CC test/thread/poller_perf/poller_perf.o 00:02:28.802 CC test/nvme/compliance/nvme_compliance.o 00:02:28.802 CC test/nvme/e2edp/nvme_dp.o 00:02:28.802 CC test/nvme/startup/startup.o 00:02:28.802 CC test/event/reactor_perf/reactor_perf.o 00:02:28.802 CC test/env/memory/memory_ut.o 00:02:28.802 CC test/nvme/boot_partition/boot_partition.o 00:02:28.802 CC test/nvme/connect_stress/connect_stress.o 00:02:28.802 CC test/nvme/reserve/reserve.o 00:02:28.802 CC examples/vmd/led/led.o 00:02:28.802 CC test/nvme/simple_copy/simple_copy.o 00:02:28.802 CC test/event/reactor/reactor.o 00:02:28.802 CC test/env/vtophys/vtophys.o 00:02:28.802 CC test/nvme/cuse/cuse.o 00:02:28.802 CC test/env/pci/pci_ut.o 00:02:28.802 CC test/accel/dif/dif.o 00:02:28.802 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:28.802 CC test/event/event_perf/event_perf.o 00:02:28.802 CC test/event/app_repeat/app_repeat.o 00:02:28.802 CC test/nvme/fused_ordering/fused_ordering.o 00:02:28.802 CC test/app/bdev_svc/bdev_svc.o 00:02:28.802 CC examples/vmd/lsvmd/lsvmd.o 00:02:28.802 CC test/bdev/bdevio/bdevio.o 00:02:29.075 CC test/nvme/fdp/fdp.o 00:02:29.075 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:29.075 CC examples/ioat/verify/verify.o 00:02:29.075 CC test/blobfs/mkfs/mkfs.o 00:02:29.075 CC examples/ioat/perf/perf.o 00:02:29.075 CC examples/util/zipf/zipf.o 00:02:29.075 CC test/dma/test_dma/test_dma.o 00:02:29.075 CC examples/sock/hello_world/hello_sock.o 00:02:29.075 CC examples/nvme/hotplug/hotplug.o 00:02:29.075 CC examples/nvme/hello_world/hello_world.o 00:02:29.075 CC examples/nvme/abort/abort.o 00:02:29.075 CC examples/accel/perf/accel_perf.o 00:02:29.075 CC app/fio/nvme/fio_plugin.o 00:02:29.075 CC 
examples/idxd/perf/perf.o 00:02:29.075 CC examples/nvme/reconnect/reconnect.o 00:02:29.075 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:29.075 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:29.075 CC examples/nvme/arbitration/arbitration.o 00:02:29.075 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:29.075 CC examples/blob/hello_world/hello_blob.o 00:02:29.075 CC test/event/scheduler/scheduler.o 00:02:29.075 CC examples/nvmf/nvmf/nvmf.o 00:02:29.075 CC examples/blob/cli/blobcli.o 00:02:29.075 CC examples/bdev/hello_world/hello_bdev.o 00:02:29.075 CC app/fio/bdev/fio_plugin.o 00:02:29.075 CC examples/thread/thread/thread_ex.o 00:02:29.075 CC examples/bdev/bdevperf/bdevperf.o 00:02:29.075 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:29.075 CC test/lvol/esnap/esnap.o 00:02:29.075 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:29.075 LINK spdk_lspci 00:02:29.075 CC test/env/mem_callbacks/mem_callbacks.o 00:02:29.341 LINK rpc_client_test 00:02:29.341 LINK jsoncat 00:02:29.341 LINK interrupt_tgt 00:02:29.341 LINK histogram_perf 00:02:29.341 LINK spdk_nvme_discover 00:02:29.341 LINK reactor_perf 00:02:29.341 LINK lsvmd 00:02:29.341 LINK event_perf 00:02:29.341 LINK vtophys 00:02:29.341 LINK nvmf_tgt 00:02:29.341 LINK reactor 00:02:29.341 LINK stub 00:02:29.341 LINK led 00:02:29.341 LINK iscsi_tgt 00:02:29.341 LINK boot_partition 00:02:29.341 LINK app_repeat 00:02:29.341 LINK poller_perf 00:02:29.341 LINK spdk_trace_record 00:02:29.341 LINK startup 00:02:29.341 LINK connect_stress 00:02:29.341 LINK bdev_svc 00:02:29.341 LINK vhost 00:02:29.341 LINK zipf 00:02:29.341 LINK err_injection 00:02:29.341 CXX test/cpp_headers/ioat_spec.o 00:02:29.341 LINK env_dpdk_post_init 00:02:29.605 CXX test/cpp_headers/iscsi_spec.o 00:02:29.605 LINK doorbell_aers 00:02:29.605 CXX test/cpp_headers/json.o 00:02:29.605 LINK spdk_tgt 00:02:29.605 CXX test/cpp_headers/jsonrpc.o 00:02:29.605 LINK pmr_persistence 00:02:29.605 CXX test/cpp_headers/likely.o 00:02:29.605 CXX test/cpp_headers/log.o 00:02:29.605 LINK mkfs 00:02:29.605 CXX test/cpp_headers/lvol.o 00:02:29.605 CXX test/cpp_headers/memory.o 00:02:29.605 LINK cmb_copy 00:02:29.605 CXX test/cpp_headers/mmio.o 00:02:29.605 LINK fused_ordering 00:02:29.605 CXX test/cpp_headers/nbd.o 00:02:29.605 CXX test/cpp_headers/notify.o 00:02:29.605 CXX test/cpp_headers/nvme.o 00:02:29.605 CXX test/cpp_headers/nvme_intel.o 00:02:29.605 CXX test/cpp_headers/nvme_ocssd.o 00:02:29.605 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:29.605 CXX test/cpp_headers/nvme_spec.o 00:02:29.605 CXX test/cpp_headers/nvme_zns.o 00:02:29.605 LINK ioat_perf 00:02:29.605 LINK simple_copy 00:02:29.605 CXX test/cpp_headers/nvmf_cmd.o 00:02:29.605 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:29.605 CXX test/cpp_headers/nvmf.o 00:02:29.605 CXX test/cpp_headers/nvmf_spec.o 00:02:29.605 LINK reserve 00:02:29.605 CXX test/cpp_headers/nvmf_transport.o 00:02:29.605 CXX test/cpp_headers/opal.o 00:02:29.605 CXX test/cpp_headers/opal_spec.o 00:02:29.605 CXX test/cpp_headers/pipe.o 00:02:29.605 CXX test/cpp_headers/pci_ids.o 00:02:29.605 CXX test/cpp_headers/queue.o 00:02:29.605 CXX test/cpp_headers/reduce.o 00:02:29.605 CXX test/cpp_headers/rpc.o 00:02:29.605 CXX test/cpp_headers/scheduler.o 00:02:29.605 CXX test/cpp_headers/scsi.o 00:02:29.605 CXX test/cpp_headers/scsi_spec.o 00:02:29.605 CXX test/cpp_headers/sock.o 00:02:29.605 LINK hello_sock 00:02:29.605 CXX test/cpp_headers/stdinc.o 00:02:29.605 CXX test/cpp_headers/string.o 00:02:29.605 LINK verify 00:02:29.605 LINK hello_world 00:02:29.605 CXX 
test/cpp_headers/thread.o 00:02:29.605 LINK reset 00:02:29.605 LINK overhead 00:02:29.605 LINK nvme_dp 00:02:29.605 CXX test/cpp_headers/trace.o 00:02:29.605 LINK sgl 00:02:29.605 LINK hotplug 00:02:29.605 CXX test/cpp_headers/trace_parser.o 00:02:29.605 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:29.605 LINK aer 00:02:29.605 LINK nvme_compliance 00:02:29.605 LINK scheduler 00:02:29.605 LINK hello_bdev 00:02:29.605 LINK hello_blob 00:02:29.605 CXX test/cpp_headers/tree.o 00:02:29.605 LINK spdk_dd 00:02:29.605 LINK thread 00:02:29.605 CXX test/cpp_headers/ublk.o 00:02:29.605 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:29.605 LINK arbitration 00:02:29.605 CXX test/cpp_headers/util.o 00:02:29.605 LINK fdp 00:02:29.867 LINK abort 00:02:29.867 CXX test/cpp_headers/uuid.o 00:02:29.867 CXX test/cpp_headers/version.o 00:02:29.867 CXX test/cpp_headers/vfio_user_pci.o 00:02:29.867 LINK dif 00:02:29.867 LINK nvmf 00:02:29.867 LINK reconnect 00:02:29.867 CXX test/cpp_headers/vhost.o 00:02:29.867 CXX test/cpp_headers/vfio_user_spec.o 00:02:29.867 CXX test/cpp_headers/vmd.o 00:02:29.867 CXX test/cpp_headers/xor.o 00:02:29.867 CXX test/cpp_headers/zipf.o 00:02:29.867 LINK idxd_perf 00:02:29.867 LINK bdevio 00:02:29.867 LINK pci_ut 00:02:29.867 LINK spdk_trace 00:02:29.867 LINK test_dma 00:02:29.867 LINK accel_perf 00:02:29.867 LINK blobcli 00:02:30.125 LINK nvme_manage 00:02:30.125 LINK nvme_fuzz 00:02:30.125 LINK spdk_nvme 00:02:30.125 LINK spdk_bdev 00:02:30.125 LINK mem_callbacks 00:02:30.125 LINK spdk_nvme_identify 00:02:30.125 LINK spdk_nvme_perf 00:02:30.125 LINK memory_ut 00:02:30.384 LINK spdk_top 00:02:30.384 LINK cuse 00:02:30.384 LINK vhost_fuzz 00:02:30.384 LINK bdevperf 00:02:30.953 LINK iscsi_fuzz 00:02:32.861 LINK esnap 00:02:32.861 00:02:32.861 real 0m44.559s 00:02:32.861 user 6m12.311s 00:02:32.861 sys 3m43.329s 00:02:32.861 17:11:52 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:32.861 17:11:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.861 ************************************ 00:02:32.861 END TEST make 00:02:32.861 ************************************ 00:02:33.121 17:11:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:33.121 17:11:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:33.121 17:11:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:33.121 17:11:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:33.121 17:11:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:33.121 17:11:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:33.121 17:11:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:33.121 17:11:52 -- scripts/common.sh@335 -- # IFS=.-: 00:02:33.121 17:11:52 -- scripts/common.sh@335 -- # read -ra ver1 00:02:33.121 17:11:52 -- scripts/common.sh@336 -- # IFS=.-: 00:02:33.121 17:11:52 -- scripts/common.sh@336 -- # read -ra ver2 00:02:33.121 17:11:52 -- scripts/common.sh@337 -- # local 'op=<' 00:02:33.121 17:11:52 -- scripts/common.sh@339 -- # ver1_l=2 00:02:33.121 17:11:52 -- scripts/common.sh@340 -- # ver2_l=1 00:02:33.121 17:11:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:33.121 17:11:52 -- scripts/common.sh@343 -- # case "$op" in 00:02:33.121 17:11:52 -- scripts/common.sh@344 -- # : 1 00:02:33.121 17:11:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:33.121 17:11:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:33.121 17:11:52 -- scripts/common.sh@364 -- # decimal 1 00:02:33.121 17:11:52 -- scripts/common.sh@352 -- # local d=1 00:02:33.121 17:11:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:33.121 17:11:52 -- scripts/common.sh@354 -- # echo 1 00:02:33.121 17:11:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:33.121 17:11:52 -- scripts/common.sh@365 -- # decimal 2 00:02:33.121 17:11:52 -- scripts/common.sh@352 -- # local d=2 00:02:33.121 17:11:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:33.121 17:11:52 -- scripts/common.sh@354 -- # echo 2 00:02:33.121 17:11:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:33.121 17:11:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:33.121 17:11:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:33.121 17:11:52 -- scripts/common.sh@367 -- # return 0 00:02:33.121 17:11:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:33.121 17:11:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:33.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.121 --rc genhtml_branch_coverage=1 00:02:33.121 --rc genhtml_function_coverage=1 00:02:33.121 --rc genhtml_legend=1 00:02:33.121 --rc geninfo_all_blocks=1 00:02:33.121 --rc geninfo_unexecuted_blocks=1 00:02:33.121 00:02:33.121 ' 00:02:33.121 17:11:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:33.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.121 --rc genhtml_branch_coverage=1 00:02:33.121 --rc genhtml_function_coverage=1 00:02:33.121 --rc genhtml_legend=1 00:02:33.121 --rc geninfo_all_blocks=1 00:02:33.121 --rc geninfo_unexecuted_blocks=1 00:02:33.121 00:02:33.121 ' 00:02:33.121 17:11:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:33.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.121 --rc genhtml_branch_coverage=1 00:02:33.121 --rc genhtml_function_coverage=1 00:02:33.121 --rc genhtml_legend=1 00:02:33.121 --rc geninfo_all_blocks=1 00:02:33.121 --rc geninfo_unexecuted_blocks=1 00:02:33.121 00:02:33.121 ' 00:02:33.121 17:11:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:33.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.121 --rc genhtml_branch_coverage=1 00:02:33.121 --rc genhtml_function_coverage=1 00:02:33.121 --rc genhtml_legend=1 00:02:33.121 --rc geninfo_all_blocks=1 00:02:33.121 --rc geninfo_unexecuted_blocks=1 00:02:33.121 00:02:33.121 ' 00:02:33.121 17:11:52 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:33.121 17:11:52 -- nvmf/common.sh@7 -- # uname -s 00:02:33.121 17:11:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:33.121 17:11:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:33.121 17:11:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:33.121 17:11:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:33.121 17:11:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:33.121 17:11:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:33.121 17:11:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:33.121 17:11:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:33.121 17:11:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:33.121 17:11:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:33.121 17:11:52 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:02:33.121 17:11:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:02:33.121 17:11:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:33.121 17:11:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:33.121 17:11:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:33.121 17:11:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:33.121 17:11:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:33.122 17:11:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:33.122 17:11:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:33.122 17:11:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.122 17:11:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.122 17:11:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.122 17:11:52 -- paths/export.sh@5 -- # export PATH 00:02:33.122 17:11:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.122 17:11:52 -- nvmf/common.sh@46 -- # : 0 00:02:33.122 17:11:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:33.122 17:11:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:33.122 17:11:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:33.122 17:11:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:33.122 17:11:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:33.122 17:11:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:33.122 17:11:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:33.122 17:11:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:33.122 17:11:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:33.122 17:11:52 -- spdk/autotest.sh@32 -- # uname -s 00:02:33.122 17:11:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:33.122 17:11:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:33.122 17:11:52 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:33.122 17:11:52 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:33.122 17:11:52 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:33.122 17:11:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:33.122 17:11:52 -- 
spdk/autotest.sh@46 -- # type -P udevadm 00:02:33.122 17:11:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:33.122 17:11:52 -- spdk/autotest.sh@48 -- # udevadm_pid=2465095 00:02:33.122 17:11:52 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:33.122 17:11:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:33.122 17:11:52 -- spdk/autotest.sh@54 -- # echo 2465097 00:02:33.122 17:11:52 -- spdk/autotest.sh@56 -- # echo 2465098 00:02:33.122 17:11:52 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:33.122 17:11:52 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:33.122 17:11:52 -- spdk/autotest.sh@60 -- # echo 2465099 00:02:33.122 17:11:52 -- spdk/autotest.sh@62 -- # echo 2465100 00:02:33.122 17:11:52 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:33.122 17:11:52 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:33.122 17:11:52 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:33.122 17:11:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:33.122 17:11:52 -- common/autotest_common.sh@10 -- # set +x 00:02:33.122 17:11:52 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:33.122 17:11:52 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:33.381 17:11:52 -- spdk/autotest.sh@70 -- # create_test_list 00:02:33.381 17:11:52 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:33.381 17:11:52 -- common/autotest_common.sh@10 -- # set +x 00:02:33.381 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:33.381 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:33.381 17:11:52 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:33.381 17:11:52 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:33.381 17:11:52 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:33.381 17:11:52 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:33.381 17:11:52 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:33.381 17:11:52 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:33.381 17:11:52 -- common/autotest_common.sh@1450 -- # uname 00:02:33.381 17:11:52 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:33.381 17:11:52 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:33.381 17:11:52 -- common/autotest_common.sh@1470 -- # uname 00:02:33.381 17:11:52 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:33.381 17:11:52 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:33.381 17:11:52 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 
--rc geninfo_unexecuted_blocks=1 --version 00:02:33.381 lcov: LCOV version 1.15 00:02:33.381 17:11:53 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:45.580 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:45.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:45.580 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:45.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:45.580 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:45.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:57.786 17:12:15 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:02:57.786 17:12:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:57.786 17:12:15 -- common/autotest_common.sh@10 -- # set +x 00:02:57.786 17:12:15 -- spdk/autotest.sh@89 -- # rm -f 00:02:57.786 17:12:15 -- spdk/autotest.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.163 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:59.423 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:59.423 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:59.423 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:59.423 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:59.423 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:59.423 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:59.423 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:59.423 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:59.423 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:59.423 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:59.682 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:59.682 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:59.682 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:59.682 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:59.682 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:59.682 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:59.682 17:12:19 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:02:59.682 17:12:19 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:02:59.682 17:12:19 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:02:59.682 17:12:19 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:02:59.682 17:12:19 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:59.682 17:12:19 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:02:59.682 17:12:19 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:02:59.682 17:12:19 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned 
]] 00:02:59.682 17:12:19 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:59.682 17:12:19 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:02:59.682 17:12:19 -- spdk/autotest.sh@108 -- # grep -v p 00:02:59.682 17:12:19 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:02:59.682 17:12:19 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:59.682 17:12:19 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:59.682 17:12:19 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:02:59.682 17:12:19 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:59.682 17:12:19 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:59.682 No valid GPT data, bailing 00:02:59.682 17:12:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:59.682 17:12:19 -- scripts/common.sh@393 -- # pt= 00:02:59.682 17:12:19 -- scripts/common.sh@394 -- # return 1 00:02:59.682 17:12:19 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:59.941 1+0 records in 00:02:59.941 1+0 records out 00:02:59.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00744246 s, 141 MB/s 00:02:59.941 17:12:19 -- spdk/autotest.sh@116 -- # sync 00:02:59.941 17:12:19 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:59.941 17:12:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:59.941 17:12:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:08.138 17:12:26 -- spdk/autotest.sh@122 -- # uname -s 00:03:08.138 17:12:26 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:08.138 17:12:26 -- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:08.138 17:12:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:08.138 17:12:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:08.138 17:12:26 -- common/autotest_common.sh@10 -- # set +x 00:03:08.138 ************************************ 00:03:08.138 START TEST setup.sh 00:03:08.138 ************************************ 00:03:08.138 17:12:26 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:08.138 * Looking for test storage... 
00:03:08.138 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:08.138 17:12:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:08.138 17:12:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:08.138 17:12:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:08.138 17:12:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:08.138 17:12:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:08.138 17:12:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:08.138 17:12:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:08.138 17:12:26 -- scripts/common.sh@335 -- # IFS=.-: 00:03:08.138 17:12:26 -- scripts/common.sh@335 -- # read -ra ver1 00:03:08.138 17:12:26 -- scripts/common.sh@336 -- # IFS=.-: 00:03:08.138 17:12:26 -- scripts/common.sh@336 -- # read -ra ver2 00:03:08.138 17:12:26 -- scripts/common.sh@337 -- # local 'op=<' 00:03:08.138 17:12:26 -- scripts/common.sh@339 -- # ver1_l=2 00:03:08.138 17:12:26 -- scripts/common.sh@340 -- # ver2_l=1 00:03:08.138 17:12:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:08.138 17:12:26 -- scripts/common.sh@343 -- # case "$op" in 00:03:08.138 17:12:26 -- scripts/common.sh@344 -- # : 1 00:03:08.138 17:12:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:08.138 17:12:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:08.138 17:12:26 -- scripts/common.sh@364 -- # decimal 1 00:03:08.138 17:12:26 -- scripts/common.sh@352 -- # local d=1 00:03:08.138 17:12:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:08.138 17:12:26 -- scripts/common.sh@354 -- # echo 1 00:03:08.138 17:12:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:08.138 17:12:26 -- scripts/common.sh@365 -- # decimal 2 00:03:08.138 17:12:26 -- scripts/common.sh@352 -- # local d=2 00:03:08.138 17:12:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:08.138 17:12:26 -- scripts/common.sh@354 -- # echo 2 00:03:08.139 17:12:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:08.139 17:12:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:08.139 17:12:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:08.139 17:12:26 -- scripts/common.sh@367 -- # return 0 00:03:08.139 17:12:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:08.139 17:12:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:08.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.139 --rc genhtml_branch_coverage=1 00:03:08.139 --rc genhtml_function_coverage=1 00:03:08.139 --rc genhtml_legend=1 00:03:08.139 --rc geninfo_all_blocks=1 00:03:08.139 --rc geninfo_unexecuted_blocks=1 00:03:08.139 00:03:08.139 ' 00:03:08.139 17:12:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:08.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.139 --rc genhtml_branch_coverage=1 00:03:08.139 --rc genhtml_function_coverage=1 00:03:08.139 --rc genhtml_legend=1 00:03:08.139 --rc geninfo_all_blocks=1 00:03:08.139 --rc geninfo_unexecuted_blocks=1 00:03:08.139 00:03:08.139 ' 00:03:08.139 17:12:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:08.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.139 --rc genhtml_branch_coverage=1 00:03:08.139 --rc genhtml_function_coverage=1 00:03:08.139 --rc genhtml_legend=1 00:03:08.139 --rc geninfo_all_blocks=1 00:03:08.139 --rc geninfo_unexecuted_blocks=1 00:03:08.139 00:03:08.139 ' 
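[editor's note] The xtrace run above (and the identical blocks repeated before acl.sh and hugepages.sh further down) is scripts/common.sh deciding whether the detected lcov (1.15) is older than 2, so that the branch/function coverage --rc flags get exported. As a reader aid only, here is a minimal stand-alone sketch of that style of dotted-version comparison; the helper name ver_lt and the framing are illustrative assumptions, not part of the SPDK scripts.

#!/usr/bin/env bash
# Hypothetical sketch in the spirit of the cmp_versions trace above:
# split both versions on dots and compare numerically, field by field.
ver_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        # Missing fields count as 0, so "2" compares like "2.0"
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal is not "less than"
}

if ver_lt 1.15 2; then
    # Same net effect as the exports in the log: enable branch/function coverage flags
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
[end editor's note]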
00:03:08.139 17:12:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:08.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.139 --rc genhtml_branch_coverage=1 00:03:08.139 --rc genhtml_function_coverage=1 00:03:08.139 --rc genhtml_legend=1 00:03:08.139 --rc geninfo_all_blocks=1 00:03:08.139 --rc geninfo_unexecuted_blocks=1 00:03:08.139 00:03:08.139 ' 00:03:08.139 17:12:26 -- setup/test-setup.sh@10 -- # uname -s 00:03:08.139 17:12:26 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:08.139 17:12:26 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:08.139 17:12:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:08.139 17:12:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:08.139 17:12:26 -- common/autotest_common.sh@10 -- # set +x 00:03:08.139 ************************************ 00:03:08.139 START TEST acl 00:03:08.139 ************************************ 00:03:08.139 17:12:26 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:08.139 * Looking for test storage... 00:03:08.139 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:08.139 17:12:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:08.139 17:12:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:08.139 17:12:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:08.139 17:12:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:08.139 17:12:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:08.139 17:12:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:08.139 17:12:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:08.139 17:12:26 -- scripts/common.sh@335 -- # IFS=.-: 00:03:08.139 17:12:26 -- scripts/common.sh@335 -- # read -ra ver1 00:03:08.139 17:12:26 -- scripts/common.sh@336 -- # IFS=.-: 00:03:08.139 17:12:26 -- scripts/common.sh@336 -- # read -ra ver2 00:03:08.139 17:12:26 -- scripts/common.sh@337 -- # local 'op=<' 00:03:08.139 17:12:26 -- scripts/common.sh@339 -- # ver1_l=2 00:03:08.139 17:12:26 -- scripts/common.sh@340 -- # ver2_l=1 00:03:08.139 17:12:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:08.139 17:12:26 -- scripts/common.sh@343 -- # case "$op" in 00:03:08.139 17:12:26 -- scripts/common.sh@344 -- # : 1 00:03:08.139 17:12:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:08.139 17:12:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:08.139 17:12:26 -- scripts/common.sh@364 -- # decimal 1 00:03:08.139 17:12:26 -- scripts/common.sh@352 -- # local d=1 00:03:08.139 17:12:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:08.139 17:12:26 -- scripts/common.sh@354 -- # echo 1 00:03:08.139 17:12:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:08.139 17:12:26 -- scripts/common.sh@365 -- # decimal 2 00:03:08.139 17:12:26 -- scripts/common.sh@352 -- # local d=2 00:03:08.139 17:12:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:08.139 17:12:26 -- scripts/common.sh@354 -- # echo 2 00:03:08.139 17:12:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:08.139 17:12:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:08.139 17:12:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:08.139 17:12:26 -- scripts/common.sh@367 -- # return 0 00:03:08.139 17:12:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:08.139 17:12:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:08.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.139 --rc genhtml_branch_coverage=1 00:03:08.139 --rc genhtml_function_coverage=1 00:03:08.139 --rc genhtml_legend=1 00:03:08.139 --rc geninfo_all_blocks=1 00:03:08.139 --rc geninfo_unexecuted_blocks=1 00:03:08.139 00:03:08.139 ' 00:03:08.139 17:12:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:08.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.139 --rc genhtml_branch_coverage=1 00:03:08.139 --rc genhtml_function_coverage=1 00:03:08.139 --rc genhtml_legend=1 00:03:08.139 --rc geninfo_all_blocks=1 00:03:08.139 --rc geninfo_unexecuted_blocks=1 00:03:08.139 00:03:08.139 ' 00:03:08.139 17:12:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:08.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.139 --rc genhtml_branch_coverage=1 00:03:08.139 --rc genhtml_function_coverage=1 00:03:08.139 --rc genhtml_legend=1 00:03:08.139 --rc geninfo_all_blocks=1 00:03:08.139 --rc geninfo_unexecuted_blocks=1 00:03:08.139 00:03:08.139 ' 00:03:08.139 17:12:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:08.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.139 --rc genhtml_branch_coverage=1 00:03:08.139 --rc genhtml_function_coverage=1 00:03:08.139 --rc genhtml_legend=1 00:03:08.139 --rc geninfo_all_blocks=1 00:03:08.139 --rc geninfo_unexecuted_blocks=1 00:03:08.139 00:03:08.139 ' 00:03:08.139 17:12:26 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:08.139 17:12:26 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:08.139 17:12:26 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:08.139 17:12:26 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:08.139 17:12:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:08.139 17:12:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:08.139 17:12:26 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:08.139 17:12:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.139 17:12:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:08.139 17:12:26 -- setup/acl.sh@12 -- # devs=() 00:03:08.139 17:12:26 -- setup/acl.sh@12 -- # declare -a devs 00:03:08.139 17:12:26 -- setup/acl.sh@13 -- # drivers=() 00:03:08.139 17:12:26 -- setup/acl.sh@13 -- # declare -A drivers 00:03:08.139 17:12:26 -- setup/acl.sh@51 -- # 
setup reset 00:03:08.139 17:12:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.139 17:12:26 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.437 17:12:30 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:11.437 17:12:30 -- setup/acl.sh@16 -- # local dev driver 00:03:11.437 17:12:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.437 17:12:30 -- setup/acl.sh@15 -- # setup output status 00:03:11.437 17:12:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.437 17:12:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:14.729 Hugepages 00:03:14.729 node hugesize free / total 00:03:14.729 17:12:33 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:14.729 17:12:33 -- setup/acl.sh@19 -- # continue 00:03:14.729 17:12:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:33 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:14.729 17:12:33 -- setup/acl.sh@19 -- # continue 00:03:14.729 17:12:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:33 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:14.729 17:12:33 -- setup/acl.sh@19 -- # continue 00:03:14.729 17:12:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 00:03:14.729 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:14.729 17:12:33 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:14.729 17:12:33 -- setup/acl.sh@19 -- # continue 00:03:14.729 17:12:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:33 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:14.729 17:12:33 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:33 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:33 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:14.729 17:12:33 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:33 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:33 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:14.729 17:12:33 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:33 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:33 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:14.729 17:12:33 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:33 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:33 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:14.729 17:12:33 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:33 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:33 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:34 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:34 -- 
setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # continue 00:03:14.729 17:12:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:34 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:14.729 17:12:34 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:14.729 17:12:34 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:14.729 17:12:34 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:14.729 17:12:34 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:14.729 17:12:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.729 17:12:34 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:14.729 17:12:34 -- setup/acl.sh@54 -- # run_test denied denied 00:03:14.729 17:12:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:14.729 17:12:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:14.729 17:12:34 -- common/autotest_common.sh@10 -- # set +x 00:03:14.729 ************************************ 00:03:14.729 START TEST denied 00:03:14.729 ************************************ 00:03:14.729 17:12:34 -- common/autotest_common.sh@1114 -- # denied 00:03:14.729 17:12:34 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:14.729 17:12:34 -- setup/acl.sh@39 -- # grep 'Skipping 
denied controller at 0000:d8:00.0' 00:03:14.729 17:12:34 -- setup/acl.sh@38 -- # setup output config 00:03:14.729 17:12:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.729 17:12:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:18.023 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:18.023 17:12:37 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:18.023 17:12:37 -- setup/acl.sh@28 -- # local dev driver 00:03:18.023 17:12:37 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:18.023 17:12:37 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:18.023 17:12:37 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:18.023 17:12:37 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:18.023 17:12:37 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:18.023 17:12:37 -- setup/acl.sh@41 -- # setup reset 00:03:18.023 17:12:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.023 17:12:37 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.301 00:03:23.301 real 0m8.081s 00:03:23.301 user 0m2.498s 00:03:23.301 sys 0m4.956s 00:03:23.301 17:12:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:23.301 17:12:42 -- common/autotest_common.sh@10 -- # set +x 00:03:23.301 ************************************ 00:03:23.301 END TEST denied 00:03:23.301 ************************************ 00:03:23.301 17:12:42 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:23.301 17:12:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:23.301 17:12:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:23.301 17:12:42 -- common/autotest_common.sh@10 -- # set +x 00:03:23.301 ************************************ 00:03:23.301 START TEST allowed 00:03:23.301 ************************************ 00:03:23.301 17:12:42 -- common/autotest_common.sh@1114 -- # allowed 00:03:23.301 17:12:42 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:23.301 17:12:42 -- setup/acl.sh@45 -- # setup output config 00:03:23.301 17:12:42 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:23.301 17:12:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.301 17:12:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:28.573 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:28.573 17:12:47 -- setup/acl.sh@47 -- # verify 00:03:28.573 17:12:47 -- setup/acl.sh@28 -- # local dev driver 00:03:28.573 17:12:47 -- setup/acl.sh@48 -- # setup reset 00:03:28.573 17:12:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:28.573 17:12:47 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.862 00:03:31.862 real 0m9.024s 00:03:31.862 user 0m2.363s 00:03:31.862 sys 0m4.711s 00:03:31.862 17:12:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:31.862 17:12:51 -- common/autotest_common.sh@10 -- # set +x 00:03:31.862 ************************************ 00:03:31.862 END TEST allowed 00:03:31.862 ************************************ 00:03:31.862 00:03:31.862 real 0m24.637s 00:03:31.862 user 0m7.693s 00:03:31.862 sys 0m14.656s 00:03:31.862 17:12:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:31.862 17:12:51 -- common/autotest_common.sh@10 -- # set +x 00:03:31.862 ************************************ 00:03:31.862 END TEST acl 00:03:31.862 
************************************ 00:03:31.862 17:12:51 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:31.862 17:12:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:31.862 17:12:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:31.862 17:12:51 -- common/autotest_common.sh@10 -- # set +x 00:03:31.862 ************************************ 00:03:31.862 START TEST hugepages 00:03:31.862 ************************************ 00:03:31.862 17:12:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:31.862 * Looking for test storage... 00:03:31.862 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:31.862 17:12:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:31.862 17:12:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:31.862 17:12:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:31.862 17:12:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:31.862 17:12:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:31.862 17:12:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:31.862 17:12:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:31.862 17:12:51 -- scripts/common.sh@335 -- # IFS=.-: 00:03:31.862 17:12:51 -- scripts/common.sh@335 -- # read -ra ver1 00:03:31.862 17:12:51 -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.862 17:12:51 -- scripts/common.sh@336 -- # read -ra ver2 00:03:31.862 17:12:51 -- scripts/common.sh@337 -- # local 'op=<' 00:03:31.862 17:12:51 -- scripts/common.sh@339 -- # ver1_l=2 00:03:31.862 17:12:51 -- scripts/common.sh@340 -- # ver2_l=1 00:03:31.862 17:12:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:31.862 17:12:51 -- scripts/common.sh@343 -- # case "$op" in 00:03:31.862 17:12:51 -- scripts/common.sh@344 -- # : 1 00:03:31.862 17:12:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:31.862 17:12:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:31.862 17:12:51 -- scripts/common.sh@364 -- # decimal 1 00:03:31.862 17:12:51 -- scripts/common.sh@352 -- # local d=1 00:03:31.862 17:12:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.862 17:12:51 -- scripts/common.sh@354 -- # echo 1 00:03:31.862 17:12:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:31.862 17:12:51 -- scripts/common.sh@365 -- # decimal 2 00:03:31.862 17:12:51 -- scripts/common.sh@352 -- # local d=2 00:03:31.862 17:12:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.862 17:12:51 -- scripts/common.sh@354 -- # echo 2 00:03:31.862 17:12:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:31.862 17:12:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:31.862 17:12:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:31.862 17:12:51 -- scripts/common.sh@367 -- # return 0 00:03:31.862 17:12:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.862 17:12:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:31.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.862 --rc genhtml_branch_coverage=1 00:03:31.862 --rc genhtml_function_coverage=1 00:03:31.862 --rc genhtml_legend=1 00:03:31.862 --rc geninfo_all_blocks=1 00:03:31.862 --rc geninfo_unexecuted_blocks=1 00:03:31.862 00:03:31.862 ' 00:03:31.862 17:12:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:31.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.862 --rc genhtml_branch_coverage=1 00:03:31.862 --rc genhtml_function_coverage=1 00:03:31.862 --rc genhtml_legend=1 00:03:31.862 --rc geninfo_all_blocks=1 00:03:31.862 --rc geninfo_unexecuted_blocks=1 00:03:31.862 00:03:31.862 ' 00:03:31.862 17:12:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:31.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.862 --rc genhtml_branch_coverage=1 00:03:31.863 --rc genhtml_function_coverage=1 00:03:31.863 --rc genhtml_legend=1 00:03:31.863 --rc geninfo_all_blocks=1 00:03:31.863 --rc geninfo_unexecuted_blocks=1 00:03:31.863 00:03:31.863 ' 00:03:31.863 17:12:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:31.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.863 --rc genhtml_branch_coverage=1 00:03:31.863 --rc genhtml_function_coverage=1 00:03:31.863 --rc genhtml_legend=1 00:03:31.863 --rc geninfo_all_blocks=1 00:03:31.863 --rc geninfo_unexecuted_blocks=1 00:03:31.863 00:03:31.863 ' 00:03:31.863 17:12:51 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:31.863 17:12:51 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:31.863 17:12:51 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:31.863 17:12:51 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:31.863 17:12:51 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:31.863 17:12:51 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:31.863 17:12:51 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:31.863 17:12:51 -- setup/common.sh@18 -- # local node= 00:03:31.863 17:12:51 -- setup/common.sh@19 -- # local var val 00:03:31.863 17:12:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:31.863 17:12:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.863 17:12:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.863 17:12:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.863 17:12:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.863 
17:12:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 41053084 kB' 'MemAvailable: 44790684 kB' 'Buffers: 4100 kB' 'Cached: 10586760 kB' 'SwapCached: 0 kB' 'Active: 7423612 kB' 'Inactive: 3704368 kB' 'Active(anon): 7025332 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540604 kB' 'Mapped: 158980 kB' 'Shmem: 6488212 kB' 'KReclaimable: 263636 kB' 'Slab: 1234540 kB' 'SReclaimable: 263636 kB' 'SUnreclaim: 970904 kB' 'KernelStack: 22064 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433352 kB' 'Committed_AS: 8213748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217792 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.863 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.863 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.864 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:12:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.864 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:12:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.864 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:12:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.864 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 
17:12:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.864 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:12:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:12:51 -- setup/common.sh@32 -- # continue 00:03:31.864 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 
00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # continue 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.123 17:12:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.123 17:12:51 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.123 17:12:51 -- setup/common.sh@33 -- # echo 2048 00:03:32.123 17:12:51 -- setup/common.sh@33 -- # return 0 00:03:32.123 17:12:51 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:32.123 17:12:51 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:32.123 17:12:51 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:32.123 17:12:51 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:32.123 17:12:51 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:32.123 17:12:51 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:32.123 17:12:51 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:32.123 17:12:51 -- setup/hugepages.sh@207 -- # get_nodes 00:03:32.123 17:12:51 -- setup/hugepages.sh@27 -- # local node 00:03:32.123 17:12:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.123 17:12:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:32.123 17:12:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.123 17:12:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:32.123 17:12:51 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.123 17:12:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.123 17:12:51 -- setup/hugepages.sh@208 -- # clear_hp 00:03:32.123 17:12:51 -- setup/hugepages.sh@37 -- # local node hp 00:03:32.123 17:12:51 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.123 17:12:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.123 17:12:51 -- setup/hugepages.sh@41 -- # echo 0 
00:03:32.123 17:12:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.123 17:12:51 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.123 17:12:51 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.123 17:12:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.123 17:12:51 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.123 17:12:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.123 17:12:51 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.123 17:12:51 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:32.123 17:12:51 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:32.123 17:12:51 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:32.123 17:12:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.123 17:12:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.123 17:12:51 -- common/autotest_common.sh@10 -- # set +x 00:03:32.123 ************************************ 00:03:32.123 START TEST default_setup 00:03:32.123 ************************************ 00:03:32.124 17:12:51 -- common/autotest_common.sh@1114 -- # default_setup 00:03:32.124 17:12:51 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:32.124 17:12:51 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.124 17:12:51 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:32.124 17:12:51 -- setup/hugepages.sh@51 -- # shift 00:03:32.124 17:12:51 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:32.124 17:12:51 -- setup/hugepages.sh@52 -- # local node_ids 00:03:32.124 17:12:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.124 17:12:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.124 17:12:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:32.124 17:12:51 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:32.124 17:12:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.124 17:12:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.124 17:12:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.124 17:12:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.124 17:12:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.124 17:12:51 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:32.124 17:12:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:32.124 17:12:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:32.124 17:12:51 -- setup/hugepages.sh@73 -- # return 0 00:03:32.124 17:12:51 -- setup/hugepages.sh@137 -- # setup output 00:03:32.124 17:12:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.124 17:12:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:35.437 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:35.437 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:35.437 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:35.437 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:35.437 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:35.437 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:35.437 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:35.437 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:35.437 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:35.437 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:35.437 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 
00:03:35.437 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:35.696 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:35.696 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:35.696 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:35.696 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:37.604 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:37.604 17:12:57 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:37.604 17:12:57 -- setup/hugepages.sh@89 -- # local node 00:03:37.604 17:12:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.604 17:12:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.604 17:12:57 -- setup/hugepages.sh@92 -- # local surp 00:03:37.604 17:12:57 -- setup/hugepages.sh@93 -- # local resv 00:03:37.604 17:12:57 -- setup/hugepages.sh@94 -- # local anon 00:03:37.604 17:12:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.604 17:12:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.604 17:12:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.604 17:12:57 -- setup/common.sh@18 -- # local node= 00:03:37.604 17:12:57 -- setup/common.sh@19 -- # local var val 00:03:37.604 17:12:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.604 17:12:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.604 17:12:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.604 17:12:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.604 17:12:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.604 17:12:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43251188 kB' 'MemAvailable: 46988724 kB' 'Buffers: 4100 kB' 'Cached: 10586892 kB' 'SwapCached: 0 kB' 'Active: 7421920 kB' 'Inactive: 3704368 kB' 'Active(anon): 7023640 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538384 kB' 'Mapped: 158604 kB' 'Shmem: 6488344 kB' 'KReclaimable: 263508 kB' 'Slab: 1233508 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 970000 kB' 'KernelStack: 22160 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8212872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- 
setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.604 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.604 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.605 17:12:57 -- setup/common.sh@33 -- # echo 0 00:03:37.605 17:12:57 -- setup/common.sh@33 -- # return 0 00:03:37.605 17:12:57 -- setup/hugepages.sh@97 -- # anon=0 00:03:37.605 17:12:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.605 17:12:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.605 17:12:57 -- setup/common.sh@18 -- # local node= 00:03:37.605 17:12:57 -- setup/common.sh@19 -- # local var val 00:03:37.605 17:12:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.605 17:12:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.605 17:12:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.605 17:12:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.605 17:12:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.605 17:12:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43250748 kB' 'MemAvailable: 46988284 kB' 'Buffers: 4100 kB' 'Cached: 10586892 kB' 'SwapCached: 0 kB' 'Active: 7426336 kB' 'Inactive: 3704368 kB' 'Active(anon): 7028056 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542744 kB' 'Mapped: 158632 kB' 'Shmem: 6488344 kB' 'KReclaimable: 263508 kB' 'Slab: 1233496 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969988 kB' 'KernelStack: 22128 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8216724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217952 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # 
continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.605 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.605 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 
17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.606 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.606 17:12:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.606 17:12:57 -- setup/common.sh@33 -- # echo 0 00:03:37.606 17:12:57 -- setup/common.sh@33 -- # return 0 00:03:37.606 17:12:57 -- setup/hugepages.sh@99 -- # surp=0 00:03:37.868 17:12:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.868 17:12:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.868 17:12:57 -- setup/common.sh@18 -- # local node= 00:03:37.868 17:12:57 -- setup/common.sh@19 -- # local var val 00:03:37.868 17:12:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.868 17:12:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.868 17:12:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.868 17:12:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.868 17:12:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.868 17:12:57 -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.868 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 17:12:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43249460 kB' 'MemAvailable: 46986996 kB' 'Buffers: 4100 kB' 'Cached: 10586904 kB' 'SwapCached: 0 kB' 'Active: 7420656 kB' 'Inactive: 3704368 kB' 'Active(anon): 7022376 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536892 kB' 'Mapped: 158576 kB' 'Shmem: 6488356 kB' 'KReclaimable: 263508 kB' 'Slab: 1233820 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 970312 kB' 'KernelStack: 22256 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8209104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217932 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 
00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 17:12:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.870 17:12:57 -- setup/common.sh@33 -- # echo 0 00:03:37.870 17:12:57 -- setup/common.sh@33 -- # return 0 00:03:37.870 17:12:57 -- setup/hugepages.sh@100 -- # resv=0 00:03:37.870 17:12:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:37.870 nr_hugepages=1024 00:03:37.870 17:12:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.870 resv_hugepages=0 00:03:37.870 17:12:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.870 surplus_hugepages=0 00:03:37.870 17:12:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.870 anon_hugepages=0 00:03:37.870 17:12:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.870 17:12:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:37.870 17:12:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.870 17:12:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.870 17:12:57 -- setup/common.sh@18 -- # local node= 00:03:37.870 17:12:57 -- setup/common.sh@19 -- # local var val 00:03:37.870 17:12:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.870 17:12:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.870 17:12:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.870 17:12:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.870 17:12:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.870 17:12:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43248712 kB' 'MemAvailable: 46986248 kB' 'Buffers: 4100 kB' 'Cached: 10586920 kB' 'SwapCached: 0 kB' 'Active: 7420392 kB' 'Inactive: 3704368 kB' 'Active(anon): 7022112 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 536544 kB' 'Mapped: 158160 kB' 'Shmem: 6488372 kB' 'KReclaimable: 263508 kB' 'Slab: 1233828 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 970320 kB' 'KernelStack: 22176 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8209176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 
00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.870 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # 
[[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.871 17:12:57 -- setup/common.sh@33 -- # echo 1024 00:03:37.871 17:12:57 -- setup/common.sh@33 -- # return 0 00:03:37.871 17:12:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.871 17:12:57 -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.871 17:12:57 -- setup/hugepages.sh@27 -- # local node 00:03:37.871 17:12:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.871 17:12:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:37.871 17:12:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.871 17:12:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:37.871 17:12:57 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:37.871 17:12:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.871 17:12:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.871 17:12:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.871 17:12:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.871 17:12:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.871 17:12:57 -- setup/common.sh@18 -- # local node=0 00:03:37.871 17:12:57 -- setup/common.sh@19 -- # local var val 00:03:37.871 17:12:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.871 17:12:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.871 17:12:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.871 17:12:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.871 17:12:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.871 17:12:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 19671280 kB' 'MemUsed: 12914088 kB' 'SwapCached: 0 kB' 'Active: 5031424 kB' 'Inactive: 3610408 kB' 'Active(anon): 4885764 kB' 'Inactive(anon): 0 kB' 'Active(file): 145660 kB' 'Inactive(file): 3610408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8477620 kB' 'Mapped: 53904 kB' 'AnonPages: 166996 kB' 'Shmem: 4721552 kB' 'KernelStack: 10072 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133284 kB' 'Slab: 645240 kB' 'SReclaimable: 133284 kB' 'SUnreclaim: 511956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # continue 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 17:12:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 17:12:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 17:12:57 -- setup/common.sh@33 -- # echo 0 00:03:37.872 17:12:57 -- setup/common.sh@33 -- # return 0 00:03:37.872 17:12:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.872 17:12:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.872 17:12:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.872 17:12:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.872 17:12:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:37.872 node0=1024 expecting 1024 00:03:37.872 17:12:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:37.872 00:03:37.872 real 0m5.818s 00:03:37.872 user 0m1.441s 00:03:37.872 sys 0m2.473s 00:03:37.872 17:12:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:37.872 17:12:57 -- 
common/autotest_common.sh@10 -- # set +x 00:03:37.872 ************************************ 00:03:37.872 END TEST default_setup 00:03:37.872 ************************************ 00:03:37.872 17:12:57 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:37.872 17:12:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.872 17:12:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.872 17:12:57 -- common/autotest_common.sh@10 -- # set +x 00:03:37.872 ************************************ 00:03:37.872 START TEST per_node_1G_alloc 00:03:37.872 ************************************ 00:03:37.872 17:12:57 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:03:37.872 17:12:57 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:37.872 17:12:57 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:37.872 17:12:57 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:37.872 17:12:57 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:37.872 17:12:57 -- setup/hugepages.sh@51 -- # shift 00:03:37.872 17:12:57 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:37.872 17:12:57 -- setup/hugepages.sh@52 -- # local node_ids 00:03:37.872 17:12:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:37.872 17:12:57 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:37.872 17:12:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:37.873 17:12:57 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:37.873 17:12:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.873 17:12:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:37.873 17:12:57 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:37.873 17:12:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.873 17:12:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.873 17:12:57 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:37.873 17:12:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.873 17:12:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:37.873 17:12:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.873 17:12:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:37.873 17:12:57 -- setup/hugepages.sh@73 -- # return 0 00:03:37.873 17:12:57 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:37.873 17:12:57 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:37.873 17:12:57 -- setup/hugepages.sh@146 -- # setup output 00:03:37.873 17:12:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.873 17:12:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:41.165 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:03:41.165 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:41.165 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:41.430 17:13:00 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:41.430 17:13:00 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:41.430 17:13:00 -- setup/hugepages.sh@89 -- # local node 00:03:41.430 17:13:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.430 17:13:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.430 17:13:00 -- setup/hugepages.sh@92 -- # local surp 00:03:41.430 17:13:00 -- setup/hugepages.sh@93 -- # local resv 00:03:41.430 17:13:00 -- setup/hugepages.sh@94 -- # local anon 00:03:41.430 17:13:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.430 17:13:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.430 17:13:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.430 17:13:01 -- setup/common.sh@18 -- # local node= 00:03:41.430 17:13:01 -- setup/common.sh@19 -- # local var val 00:03:41.430 17:13:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.430 17:13:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.430 17:13:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.430 17:13:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.430 17:13:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.430 17:13:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43235420 kB' 'MemAvailable: 46972956 kB' 'Buffers: 4100 kB' 'Cached: 10587016 kB' 'SwapCached: 0 kB' 'Active: 7418200 kB' 'Inactive: 3704368 kB' 'Active(anon): 7019920 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534844 kB' 'Mapped: 157048 kB' 'Shmem: 6488468 kB' 'KReclaimable: 263508 kB' 'Slab: 1234852 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 971344 kB' 'KernelStack: 21984 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8199696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217964 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.430 
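Note on the sizing seen earlier in this test: per_node_1G_alloc asks get_test_nr_hugepages for 1048576 kB on nodes 0 and 1, which the harness turns into NRHUGE=512 and HUGENODE=0,1 before re-running scripts/setup.sh. A minimal sketch of that arithmetic, assuming the 2048 kB default hugepage size reported in the meminfo dumps (the variable names below are illustrative, not the actual hugepages.sh internals):

  # Hypothetical sketch: derive the per-node hugepage count the trace shows.
  size_kb=1048576              # 1 GiB requested per node (argument to get_test_nr_hugepages)
  hugepagesize_kb=2048         # "Hugepagesize: 2048 kB" from the meminfo dumps
  nr_per_node=$(( size_kb / hugepagesize_kb ))   # -> 512
  hugenode=0,1                 # both NUMA nodes on this two-node box
  echo "NRHUGE=$nr_per_node HUGENODE=$hugenode"
  # The test then invokes setup.sh with these values, e.g.:
  #   NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
  # which is the call that printed the "Already using the vfio-pci driver" lines above.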
17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.430 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.430 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.431 17:13:01 -- 
setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.431 17:13:01 -- setup/common.sh@33 -- # echo 0 00:03:41.431 17:13:01 -- setup/common.sh@33 -- # return 0 00:03:41.431 17:13:01 -- setup/hugepages.sh@97 -- # anon=0 00:03:41.431 17:13:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:41.431 17:13:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.431 17:13:01 -- setup/common.sh@18 -- # local node= 00:03:41.431 17:13:01 -- setup/common.sh@19 -- # local var val 00:03:41.431 17:13:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.431 17:13:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.431 17:13:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.431 17:13:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.431 17:13:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.431 17:13:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43235560 kB' 'MemAvailable: 46973096 kB' 'Buffers: 4100 kB' 'Cached: 10587016 kB' 'SwapCached: 0 kB' 'Active: 7417964 kB' 'Inactive: 3704368 kB' 'Active(anon): 7019684 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534612 kB' 'Mapped: 157048 kB' 'Shmem: 6488468 kB' 'KReclaimable: 263508 kB' 'Slab: 1234852 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 971344 kB' 'KernelStack: 21984 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8199708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217964 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 
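The long runs of "[[ <field> == \H\u\g\e... ]]" / "continue" entries above and below are xtrace output of setup/common.sh's get_meminfo walking every meminfo field until it reaches the requested key and echoing its value. A simplified, self-contained sketch of that lookup, assuming a standalone helper (get_meminfo_sketch is an illustrative name; the real helper uses mapfile over the same files):

  # get_meminfo_sketch <Key> [node]   e.g. get_meminfo_sketch HugePages_Total 0
  get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # With a node argument, prefer the node-local view if it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node meminfo lines carry a "Node N " prefix; strip it before matching,
    # then compare each field name against the requested key, skipping the rest.
    while IFS=': ' read -r var val _; do
      if [[ $var == "$get" ]]; then
        echo "$val"
        return 0
      fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
  }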
17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.431 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.431 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 
17:13:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 
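These reads (AnonHugePages above, HugePages_Surp here, HugePages_Rsvd below) feed the consistency check verify_nr_hugepages evaluates further down as (( 1024 == nr_hugepages + surp + resv )). A rough, self-contained equivalent of that accounting, assuming simple awk lookups (meminfo_val and the function name are illustrative, not the hugepages.sh originals):

  # Print the numeric value of one /proc/meminfo field.
  meminfo_val() { awk -v k="$1:" '$1 == k {print $2; exit}' /proc/meminfo; }

  # Succeed only if HugePages_Total equals the requested count plus any
  # surplus and reserved pages -- the same identity the trace checks.
  verify_hugepages_sketch() {
    local nr_hugepages=$1
    local total surp resv
    total=$(meminfo_val HugePages_Total)
    surp=$(meminfo_val HugePages_Surp)
    resv=$(meminfo_val HugePages_Rsvd)
    (( total == nr_hugepages + surp + resv ))
  }

  # e.g.: verify_hugepages_sketch 1024 && echo ok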
00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.432 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.432 17:13:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.433 17:13:01 -- setup/common.sh@33 -- # echo 0 00:03:41.433 17:13:01 -- setup/common.sh@33 -- # return 0 00:03:41.433 17:13:01 -- setup/hugepages.sh@99 -- # surp=0 00:03:41.433 17:13:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.433 17:13:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.433 17:13:01 -- setup/common.sh@18 -- # local node= 00:03:41.433 17:13:01 -- setup/common.sh@19 -- # local var val 00:03:41.433 17:13:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.433 17:13:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.433 17:13:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.433 17:13:01 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.433 17:13:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.433 17:13:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43235560 kB' 'MemAvailable: 46973096 kB' 'Buffers: 4100 kB' 'Cached: 10587016 kB' 'SwapCached: 0 kB' 'Active: 7418048 kB' 'Inactive: 3704368 kB' 'Active(anon): 7019768 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534692 kB' 'Mapped: 157048 kB' 'Shmem: 6488468 kB' 'KReclaimable: 263508 kB' 'Slab: 1234852 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 971344 kB' 'KernelStack: 22016 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8199720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217964 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # 
continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.433 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.433 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.434 17:13:01 -- setup/common.sh@33 -- # echo 0 00:03:41.434 17:13:01 -- setup/common.sh@33 -- # return 0 00:03:41.434 17:13:01 -- setup/hugepages.sh@100 -- # resv=0 00:03:41.434 17:13:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:41.434 nr_hugepages=1024 00:03:41.434 17:13:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:41.434 resv_hugepages=0 00:03:41.434 17:13:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:41.434 surplus_hugepages=0 00:03:41.434 17:13:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:41.434 anon_hugepages=0 00:03:41.434 17:13:01 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.434 17:13:01 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:41.434 17:13:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:41.434 17:13:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:41.434 17:13:01 -- setup/common.sh@18 -- # local node= 00:03:41.434 17:13:01 -- setup/common.sh@19 -- # local var val 00:03:41.434 17:13:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.434 17:13:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.434 17:13:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.434 17:13:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.434 17:13:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.434 17:13:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.434 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.434 17:13:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43237092 kB' 'MemAvailable: 46974628 kB' 'Buffers: 4100 kB' 'Cached: 10587044 kB' 'SwapCached: 0 kB' 'Active: 7418020 kB' 'Inactive: 3704368 kB' 'Active(anon): 7019740 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534660 kB' 'Mapped: 157048 kB' 'Shmem: 6488496 kB' 'KReclaimable: 263508 kB' 'Slab: 1234844 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 971336 kB' 'KernelStack: 22000 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8199736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217964 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:41.434 17:13:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.435 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.435 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.436 17:13:01 -- setup/common.sh@33 -- # echo 1024 00:03:41.436 17:13:01 -- setup/common.sh@33 -- # return 0 00:03:41.436 17:13:01 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.436 17:13:01 -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.436 17:13:01 -- setup/hugepages.sh@27 -- # local node 00:03:41.436 17:13:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.436 17:13:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.436 17:13:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.436 17:13:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.436 17:13:01 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:41.436 17:13:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.436 17:13:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.436 17:13:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.436 17:13:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.436 17:13:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.436 17:13:01 -- setup/common.sh@18 -- # local node=0 00:03:41.436 17:13:01 -- setup/common.sh@19 -- # local var val 00:03:41.436 17:13:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.436 17:13:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.436 17:13:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.436 17:13:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.436 17:13:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.436 17:13:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 20699112 kB' 'MemUsed: 11886256 kB' 'SwapCached: 0 kB' 'Active: 5028896 kB' 'Inactive: 3610408 kB' 'Active(anon): 4883236 kB' 'Inactive(anon): 0 kB' 'Active(file): 145660 kB' 'Inactive(file): 3610408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8477740 kB' 'Mapped: 53612 kB' 'AnonPages: 164776 kB' 'Shmem: 4721672 kB' 'KernelStack: 10008 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133284 kB' 'Slab: 645820 kB' 'SReclaimable: 133284 kB' 'SUnreclaim: 512536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.436 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.436 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- 
setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.437 17:13:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.437 17:13:01 -- setup/common.sh@33 -- # echo 0 00:03:41.437 17:13:01 -- setup/common.sh@33 -- # return 0 00:03:41.437 17:13:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.437 17:13:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.437 17:13:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.437 17:13:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:41.437 17:13:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.437 17:13:01 -- setup/common.sh@18 -- # local node=1 00:03:41.437 17:13:01 -- setup/common.sh@19 -- # local var val 00:03:41.437 17:13:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.437 17:13:01 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.437 17:13:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:41.437 17:13:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:41.437 17:13:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.437 17:13:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.437 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698436 kB' 'MemFree: 22537728 kB' 'MemUsed: 5160708 kB' 'SwapCached: 0 kB' 'Active: 2388824 kB' 'Inactive: 93960 kB' 'Active(anon): 2136204 kB' 'Inactive(anon): 0 kB' 'Active(file): 252620 kB' 'Inactive(file): 93960 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2113428 kB' 'Mapped: 103436 kB' 'AnonPages: 369480 kB' 'Shmem: 1766848 kB' 'KernelStack: 11976 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130224 kB' 'Slab: 589024 kB' 'SReclaimable: 130224 kB' 'SUnreclaim: 458800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- 
setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.438 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.438 17:13:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.439 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.439 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.439 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 
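Not part of the captured trace: the loop above walks /sys/devices/system/node/node1/meminfo field by field to pull HugePages_Surp. A minimal sketch, assuming 2048 kB hugepages and the two-node layout seen in this run, of reading the same per-node counters directly from the hugepages sysfs directory:

  # per-node 2 MiB hugepage counters exported by the kernel
  for node in /sys/devices/system/node/node[0-9]*; do
      d="$node/hugepages/hugepages-2048kB"
      echo "${node##*/}: total=$(cat "$d/nr_hugepages") free=$(cat "$d/free_hugepages") surplus=$(cat "$d/surplus_hugepages")"
  done

On this runner both nodes should report total=512 free=512 surplus=0, matching the HugePages_* fields parsed above.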
00:03:41.439 17:13:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.439 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.439 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.439 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.439 17:13:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.439 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.439 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.439 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.439 17:13:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.439 17:13:01 -- setup/common.sh@32 -- # continue 00:03:41.439 17:13:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.439 17:13:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.439 17:13:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.439 17:13:01 -- setup/common.sh@33 -- # echo 0 00:03:41.439 17:13:01 -- setup/common.sh@33 -- # return 0 00:03:41.439 17:13:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.699 17:13:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.699 17:13:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.699 17:13:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:41.699 17:13:01 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:41.699 node0=512 expecting 512 00:03:41.699 17:13:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.699 17:13:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.699 17:13:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:41.699 17:13:01 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:41.699 node1=512 expecting 512 00:03:41.699 17:13:01 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:41.699 00:03:41.699 real 0m3.663s 00:03:41.699 user 0m1.360s 00:03:41.699 sys 0m2.370s 00:03:41.699 17:13:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:41.699 17:13:01 -- common/autotest_common.sh@10 -- # set +x 00:03:41.699 ************************************ 00:03:41.699 END TEST per_node_1G_alloc 00:03:41.699 ************************************ 00:03:41.699 17:13:01 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:41.699 17:13:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:41.699 17:13:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:41.699 17:13:01 -- common/autotest_common.sh@10 -- # set +x 00:03:41.699 ************************************ 00:03:41.699 START TEST even_2G_alloc 00:03:41.699 ************************************ 00:03:41.699 17:13:01 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:03:41.699 17:13:01 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:41.699 17:13:01 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:41.699 17:13:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:41.699 17:13:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:41.699 17:13:01 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:41.699 17:13:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:41.699 17:13:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:41.699 17:13:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:41.699 17:13:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:41.699 17:13:01 -- 
setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:41.699 17:13:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:41.699 17:13:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:41.699 17:13:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:41.699 17:13:01 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:41.699 17:13:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:41.699 17:13:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:41.699 17:13:01 -- setup/hugepages.sh@83 -- # : 512 00:03:41.699 17:13:01 -- setup/hugepages.sh@84 -- # : 1 00:03:41.699 17:13:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:41.699 17:13:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:41.699 17:13:01 -- setup/hugepages.sh@83 -- # : 0 00:03:41.699 17:13:01 -- setup/hugepages.sh@84 -- # : 0 00:03:41.699 17:13:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:41.699 17:13:01 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:41.699 17:13:01 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:41.699 17:13:01 -- setup/hugepages.sh@153 -- # setup output 00:03:41.699 17:13:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.699 17:13:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:45.000 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:45.000 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:45.000 17:13:04 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:45.000 17:13:04 -- setup/hugepages.sh@89 -- # local node 00:03:45.000 17:13:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.000 17:13:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.000 17:13:04 -- setup/hugepages.sh@92 -- # local surp 00:03:45.000 17:13:04 -- setup/hugepages.sh@93 -- # local resv 00:03:45.000 17:13:04 -- setup/hugepages.sh@94 -- # local anon 00:03:45.000 17:13:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.000 17:13:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.000 17:13:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.000 17:13:04 -- setup/common.sh@18 -- # local node= 00:03:45.000 17:13:04 -- setup/common.sh@19 -- # local var val 00:03:45.000 17:13:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.000 17:13:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.000 17:13:04 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.000 17:13:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.000 17:13:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.000 17:13:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.000 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.000 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43258552 kB' 'MemAvailable: 46996088 kB' 'Buffers: 4100 kB' 'Cached: 10587144 kB' 'SwapCached: 0 kB' 'Active: 7419184 kB' 'Inactive: 3704368 kB' 'Active(anon): 7020904 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535524 kB' 'Mapped: 157076 kB' 'Shmem: 6488596 kB' 'KReclaimable: 263508 kB' 'Slab: 1232848 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969340 kB' 'KernelStack: 22016 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8200344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218060 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 
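Not part of the captured trace: the snapshot above reports 'HugePages_Total: 1024', 'Hugepagesize: 2048 kB' and 'Hugetlb: 2097152 kB', which is internally consistent (1024 pages x 2048 kB/page = 2097152 kB = 2 GiB, the even_2G_alloc target). A quick sketch of that sanity check against /proc/meminfo:

  # print the hugepage totals and the expected Hugetlb figure
  awk '/^(HugePages_Total|Hugepagesize|Hugetlb):/ {print}' /proc/meminfo
  echo "expected Hugetlb: $(( 1024 * 2048 )) kB"   # 2097152 kB = 2 GiB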
00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 
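Not part of the captured trace: the anon-hugepages step of verify_nr_hugepages (the '[[ always [madvise] never != *[never]* ]]' test seen earlier) appears to gate the AnonHugePages lookup on transparent hugepages not being disabled. A rough equivalent, assuming the standard THP sysfs knob:

  # e.g. prints "always [madvise] never" on this runner
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
  [[ $thp != *'[never]'* ]] && grep '^AnonHugePages' /proc/meminfo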
00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.001 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.001 17:13:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.002 17:13:04 -- setup/common.sh@33 -- # echo 0 00:03:45.002 17:13:04 -- setup/common.sh@33 -- # return 0 00:03:45.002 17:13:04 -- setup/hugepages.sh@97 -- # anon=0 00:03:45.002 17:13:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.002 17:13:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.002 17:13:04 -- setup/common.sh@18 -- # local node= 00:03:45.002 17:13:04 -- setup/common.sh@19 -- # local var val 00:03:45.002 17:13:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.002 17:13:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.002 17:13:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.002 17:13:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.002 17:13:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.002 17:13:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43259280 kB' 'MemAvailable: 46996816 kB' 'Buffers: 4100 kB' 'Cached: 10587144 kB' 'SwapCached: 0 kB' 
'Active: 7419220 kB' 'Inactive: 3704368 kB' 'Active(anon): 7020940 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535592 kB' 'Mapped: 157068 kB' 'Shmem: 6488596 kB' 'KReclaimable: 263508 kB' 'Slab: 1232816 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969308 kB' 'KernelStack: 22000 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8200356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218028 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.002 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.002 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 
00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.003 17:13:04 -- setup/common.sh@33 -- # echo 0 00:03:45.003 17:13:04 -- setup/common.sh@33 -- # return 0 00:03:45.003 17:13:04 -- setup/hugepages.sh@99 -- # surp=0 00:03:45.003 17:13:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.003 17:13:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.003 17:13:04 -- setup/common.sh@18 -- # local node= 00:03:45.003 17:13:04 -- setup/common.sh@19 -- # local var val 00:03:45.003 17:13:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.003 17:13:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.003 17:13:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.003 17:13:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.003 17:13:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.003 17:13:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43258184 kB' 'MemAvailable: 46995720 kB' 'Buffers: 4100 kB' 'Cached: 10587156 kB' 'SwapCached: 0 kB' 'Active: 7418796 kB' 'Inactive: 3704368 kB' 'Active(anon): 7020516 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535108 kB' 'Mapped: 157068 kB' 'Shmem: 6488608 kB' 'KReclaimable: 263508 kB' 'Slab: 1232856 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969348 kB' 'KernelStack: 21952 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8200372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218028 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.003 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.003 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 
17:13:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 
17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 
17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.004 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.004 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.005 17:13:04 -- setup/common.sh@33 -- # echo 0 00:03:45.005 17:13:04 -- setup/common.sh@33 -- # return 0 00:03:45.005 17:13:04 -- setup/hugepages.sh@100 -- # resv=0 00:03:45.005 17:13:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:45.005 nr_hugepages=1024 00:03:45.005 17:13:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.005 resv_hugepages=0 00:03:45.005 17:13:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.005 surplus_hugepages=0 00:03:45.005 17:13:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.005 anon_hugepages=0 00:03:45.005 17:13:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.005 17:13:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:45.005 17:13:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.005 17:13:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.005 17:13:04 -- setup/common.sh@18 -- # local node= 00:03:45.005 17:13:04 -- setup/common.sh@19 -- # local var val 00:03:45.005 17:13:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.005 17:13:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.005 17:13:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.005 17:13:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.005 17:13:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.005 17:13:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.005 17:13:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43258184 kB' 'MemAvailable: 46995720 kB' 'Buffers: 4100 kB' 'Cached: 10587156 kB' 'SwapCached: 0 kB' 'Active: 7418872 kB' 'Inactive: 3704368 kB' 'Active(anon): 7020592 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535216 kB' 'Mapped: 157068 kB' 'Shmem: 6488608 kB' 'KReclaimable: 263508 kB' 'Slab: 1232856 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969348 kB' 'KernelStack: 22000 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8200384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218028 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.005 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.005 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.006 17:13:04 -- setup/common.sh@33 -- # echo 1024 00:03:45.006 17:13:04 -- setup/common.sh@33 -- # return 0 00:03:45.006 17:13:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.006 17:13:04 -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.006 17:13:04 -- setup/hugepages.sh@27 -- # local node 00:03:45.006 17:13:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.006 17:13:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:45.006 17:13:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.006 17:13:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:45.006 17:13:04 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.006 17:13:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.006 17:13:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.006 17:13:04 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:03:45.006 17:13:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.006 17:13:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.006 17:13:04 -- setup/common.sh@18 -- # local node=0 00:03:45.006 17:13:04 -- setup/common.sh@19 -- # local var val 00:03:45.006 17:13:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.006 17:13:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.006 17:13:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.006 17:13:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.006 17:13:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.006 17:13:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 20706696 kB' 'MemUsed: 11878672 kB' 'SwapCached: 0 kB' 'Active: 5028100 kB' 'Inactive: 3610408 kB' 'Active(anon): 4882440 kB' 'Inactive(anon): 0 kB' 'Active(file): 145660 kB' 'Inactive(file): 3610408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8477816 kB' 'Mapped: 53612 kB' 'AnonPages: 163804 kB' 'Shmem: 4721748 kB' 'KernelStack: 9944 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133284 kB' 'Slab: 644240 kB' 'SReclaimable: 133284 kB' 'SUnreclaim: 510956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.006 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.006 17:13:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.007 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.007 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.007 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.007 17:13:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.007 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.007 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.007 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.007 17:13:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.007 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.007 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.007 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.007 17:13:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.007 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.007 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.007 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.007 17:13:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.007 17:13:04 -- setup/common.sh@32 -- # continue 00:03:45.007 17:13:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.007 17:13:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.007 17:13:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:45.007 17:13:04 -- setup/common.sh@32 -- # continue
[... repetitive xtrace condensed: each remaining field of the current node's meminfo snapshot (Inactive(anon) through HugePages_Free) is compared against HugePages_Surp with IFS=': ' / read -r var val _ and skipped via continue ...]
00:03:45.007 17:13:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.007 17:13:04 -- setup/common.sh@33 -- # echo 0
00:03:45.008 17:13:04 -- setup/common.sh@33 -- # return 0
00:03:45.008 17:13:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:45.008 17:13:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:45.008 17:13:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:45.008 17:13:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:45.008 17:13:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.008 17:13:04 -- setup/common.sh@18 -- # local node=1
00:03:45.008 17:13:04 -- setup/common.sh@19 -- # local var val
00:03:45.008 17:13:04 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.008 17:13:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.008 17:13:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:45.008 17:13:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:45.008 17:13:04 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.008 17:13:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.008 17:13:04 -- setup/common.sh@31 -- # IFS=': '
00:03:45.008 17:13:04 -- setup/common.sh@31 -- # read -r var val _
00:03:45.008 17:13:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698436 kB' 'MemFree: 22551236 kB' 'MemUsed: 5147200 kB' 'SwapCached: 0 kB' 'Active: 2390328 kB' 'Inactive: 93960 kB' 'Active(anon): 2137708 kB' 'Inactive(anon): 0 kB' 'Active(file): 252620 kB' 'Inactive(file): 93960 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2113472 kB' 'Mapped: 103456 kB' 'AnonPages: 370928 kB' 'Shmem: 1766892 kB' 'KernelStack: 12024 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130224 kB' 'Slab: 588616 kB' 'SReclaimable: 130224 kB' 'SUnreclaim: 458392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... repetitive xtrace condensed: each field of the node1 snapshot above (MemTotal through HugePages_Free) is compared against HugePages_Surp and skipped via continue ...]
00:03:45.009 17:13:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.009 17:13:04 -- setup/common.sh@33 -- # echo 0
00:03:45.009 17:13:04 -- setup/common.sh@33 -- # return 0
00:03:45.009 17:13:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:45.009 17:13:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:45.009 17:13:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:45.009 17:13:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
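(Note: the xtrace above is the setup/common.sh get_meminfo helper walking /proc/meminfo, or a per-NUMA-node meminfo file, one field at a time until the requested key matches. Below is a minimal bash sketch of what the traced helper appears to do; it is reconstructed from the trace, not copied from the SPDK source, and the shopt -s extglob line is an assumption added so the strip pattern works standalone.)

shopt -s extglob                              # assumption: the real script enables extglob elsewhere
get_meminfo() {                               # reconstruction from the xtrace above
    local get=$1 node=${2:-}                  # field name, optional NUMA node index
    local var val _
    local mem_f=/proc/meminfo
    local -a mem
    # Prefer the per-node meminfo file when a node index was passed (node 1 above)
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # strip the "Node N " prefix used by per-node files
    # Scan field by field, skipping with continue until the requested key matches
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"                      # print the value, e.g. 0 for HugePages_Surp above
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}
# e.g. get_meminfo HugePages_Surp 1   -> prints 0 against the node1 snapshot above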
00:03:45.009 17:13:04 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:45.009 node0=512 expecting 512
00:03:45.009 17:13:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:45.009 17:13:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:45.009 17:13:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:45.009 17:13:04 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:45.009 node1=512 expecting 512
00:03:45.009 17:13:04 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:45.009
00:03:45.009 real 0m3.359s
00:03:45.009 user 0m1.253s
00:03:45.009 sys 0m2.155s
00:03:45.009 17:13:04 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:45.009 17:13:04 -- common/autotest_common.sh@10 -- # set +x
00:03:45.009 ************************************
00:03:45.009 END TEST even_2G_alloc
00:03:45.009 ************************************
00:03:45.009 17:13:04 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:45.009 17:13:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:45.009 17:13:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:45.009 17:13:04 -- common/autotest_common.sh@10 -- # set +x
00:03:45.009 ************************************
00:03:45.009 START TEST odd_alloc
00:03:45.009 ************************************
00:03:45.009 17:13:04 -- common/autotest_common.sh@1114 -- # odd_alloc
00:03:45.009 17:13:04 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:45.009 17:13:04 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:45.009 17:13:04 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:45.009 17:13:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:45.009 17:13:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:45.009 17:13:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:45.009 17:13:04 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:45.009 17:13:04 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:45.009 17:13:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:45.009 17:13:04 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:45.009 17:13:04 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:45.009 17:13:04 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:45.009 17:13:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:45.009 17:13:04 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:45.009 17:13:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:45.009 17:13:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:45.009 17:13:04 -- setup/hugepages.sh@83 -- # : 513
00:03:45.009 17:13:04 -- setup/hugepages.sh@84 -- # : 1
00:03:45.009 17:13:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:45.009 17:13:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:45.009 17:13:04 -- setup/hugepages.sh@83 -- # : 0
00:03:45.009 17:13:04 -- setup/hugepages.sh@84 -- # : 0
00:03:45.009 17:13:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:45.009 17:13:04 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:45.009 17:13:04 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:45.009 17:13:04 -- setup/hugepages.sh@160 -- # setup output
00:03:45.009 17:13:04 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:45.009 17:13:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
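(Note: in the odd_alloc sizing traced above, 2098176 kB corresponds to HUGEMEM=2049 MiB, i.e. 1025 hugepages of 2048 kB, apparently rounded up from 1024.5, and get_test_nr_hugepages_per_node then spreads them over the two NUMA nodes. A hedged sketch of that split follows; the function name split_hugepages_per_node is illustrative only, not an SPDK helper.)

# Hypothetical helper mirroring the nodes_test[] arithmetic traced above:
# node1 gets floor(1025/2)=512 first, the remaining 513 pages land on node0.
split_hugepages_per_node() {
    local _nr_hugepages=$1 _no_nodes=$2
    local -a nodes_test=()
    local node
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # share for this node
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))          # pages still to place
        : $(( _no_nodes-- ))                                         # next (lower) node
    done
    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[node]}"
    done
}
# split_hugepages_per_node 1025 2   -> node0=513, node1=512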
00:03:48.299 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:48.299 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:48.300 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:48.300 17:13:08 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:48.300 17:13:08 -- setup/hugepages.sh@89 -- # local node
00:03:48.300 17:13:08 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:48.300 17:13:08 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:48.300 17:13:08 -- setup/hugepages.sh@92 -- # local surp
00:03:48.300 17:13:08 -- setup/hugepages.sh@93 -- # local resv
00:03:48.300 17:13:08 -- setup/hugepages.sh@94 -- # local anon
00:03:48.300 17:13:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:48.300 17:13:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:48.300 17:13:08 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:48.300 17:13:08 -- setup/common.sh@18 -- # local node=
00:03:48.300 17:13:08 -- setup/common.sh@19 -- # local var val
00:03:48.300 17:13:08 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.300 17:13:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.300 17:13:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.300 17:13:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.300 17:13:08 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.300 17:13:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.300 17:13:08 -- setup/common.sh@31 -- # IFS=': '
00:03:48.300 17:13:08 -- setup/common.sh@31 -- # read -r var val _
00:03:48.300 17:13:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43224084 kB' 'MemAvailable: 46961620 kB' 'Buffers: 4100 kB' 'Cached: 10587272 kB' 'SwapCached: 0 kB' 'Active: 7420044 kB' 'Inactive: 3704368 kB' 'Active(anon): 7021764 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536292 kB' 'Mapped: 157092 kB' 'Shmem: 6488724 kB' 'KReclaimable: 263508 kB' 'Slab: 1233276 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969768 kB' 'KernelStack: 21984 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480904 kB' 'Committed_AS: 8201000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218012 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB'
[... repetitive xtrace condensed: each field of the system snapshot above (MemTotal through HardwareCorrupted) is compared against AnonHugePages and skipped via continue ...]
00:03:48.564 17:13:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:48.564 17:13:08 -- setup/common.sh@33 -- # echo 0
00:03:48.564 17:13:08 -- setup/common.sh@33 -- # return 0
00:03:48.564 17:13:08 -- setup/hugepages.sh@97 -- # anon=0
00:03:48.564 17:13:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:48.564 17:13:08 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.564 17:13:08 -- setup/common.sh@18 -- # local node=
00:03:48.564 17:13:08 -- setup/common.sh@19 -- # local var val
00:03:48.564 17:13:08 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.564 17:13:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.564 17:13:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.564 17:13:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.564 17:13:08 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.564 17:13:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.564 17:13:08 -- setup/common.sh@31 -- # IFS=': '
00:03:48.564 17:13:08 -- setup/common.sh@31 -- # read -r var val _
00:03:48.564 17:13:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43226624 kB' 'MemAvailable: 46964160 kB' 'Buffers: 4100 kB' 'Cached: 10587276 kB' 'SwapCached: 0 kB' 'Active: 7419492 kB' 'Inactive: 3704368 kB' 'Active(anon): 7021212 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535720 kB' 'Mapped: 157076 kB' 'Shmem: 6488728 kB' 'KReclaimable: 263508 kB' 'Slab: 1233236 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969728 kB' 'KernelStack: 21984 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480904 kB' 'Committed_AS: 8201012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB'
[... repetitive xtrace condensed: each field of the snapshot above (MemTotal through HugePages_Rsvd) is compared against HugePages_Surp and skipped via continue ...]
00:03:48.566 17:13:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.566 17:13:08 -- setup/common.sh@33 -- # echo 0
00:03:48.566 17:13:08 -- setup/common.sh@33 -- # return 0
00:03:48.566 17:13:08 -- setup/hugepages.sh@99 -- # surp=0
00:03:48.566 17:13:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:48.566 17:13:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:48.566 17:13:08 -- setup/common.sh@18 -- # local node=
00:03:48.566 17:13:08 -- setup/common.sh@19 -- # local var val
00:03:48.566 17:13:08 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.566 17:13:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.566 17:13:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.566 17:13:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.566 17:13:08 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.566 17:13:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.566 17:13:08 -- setup/common.sh@31 -- # IFS=': '
00:03:48.566 17:13:08 -- setup/common.sh@31 -- # read -r var val _
00:03:48.567 17:13:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43227972 kB' 'MemAvailable: 46965508 kB' 'Buffers: 4100 kB' 'Cached: 10587288 kB' 'SwapCached: 0 kB' 'Active: 7419496 kB' 'Inactive: 3704368 kB' 'Active(anon): 7021216 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535720 kB' 'Mapped: 157076 kB' 'Shmem: 6488740 kB' 'KReclaimable: 263508 kB' 'Slab: 1233236 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969728 kB' 'KernelStack: 21984 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480904 kB' 'Committed_AS: 8201028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB'
[... repetitive xtrace continues: the snapshot above is scanned field by field for HugePages_Rsvd; the scan has reached ShmemPmdMapped where this excerpt ends ...]
-- # continue 00:03:48.568 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.568 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.568 17:13:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.568 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.568 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.568 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.568 17:13:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.568 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.568 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.568 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.568 17:13:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.568 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.568 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.568 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.568 17:13:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.568 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.568 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.568 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.568 17:13:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.569 17:13:08 -- setup/common.sh@33 -- # echo 0 00:03:48.569 17:13:08 -- setup/common.sh@33 -- # return 0 00:03:48.569 17:13:08 -- setup/hugepages.sh@100 -- # resv=0 00:03:48.569 17:13:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:48.569 nr_hugepages=1025 00:03:48.569 17:13:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.569 resv_hugepages=0 00:03:48.569 17:13:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.569 surplus_hugepages=0 00:03:48.569 17:13:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.569 anon_hugepages=0 00:03:48.569 17:13:08 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.569 17:13:08 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:48.569 17:13:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.569 17:13:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.569 17:13:08 -- setup/common.sh@18 -- # local node= 00:03:48.569 17:13:08 -- setup/common.sh@19 -- # local var val 00:03:48.569 17:13:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.569 17:13:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.569 17:13:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.569 17:13:08 -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:03:48.569 17:13:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.569 17:13:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43231120 kB' 'MemAvailable: 46968656 kB' 'Buffers: 4100 kB' 'Cached: 10587300 kB' 'SwapCached: 0 kB' 'Active: 7419780 kB' 'Inactive: 3704368 kB' 'Active(anon): 7021500 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536112 kB' 'Mapped: 157076 kB' 'Shmem: 6488752 kB' 'KReclaimable: 263508 kB' 'Slab: 1233236 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969728 kB' 'KernelStack: 21984 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480904 kB' 'Committed_AS: 8202940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.569 17:13:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.569 17:13:08 -- 
setup/common.sh@32 -- # continue 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.569 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 
00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- 
setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.570 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.570 17:13:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.570 17:13:08 -- setup/common.sh@33 -- # echo 1025 00:03:48.570 17:13:08 -- setup/common.sh@33 -- # return 0 00:03:48.570 17:13:08 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.570 17:13:08 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.570 17:13:08 -- setup/hugepages.sh@27 -- # local node 00:03:48.570 17:13:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.570 17:13:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.570 17:13:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.570 17:13:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:48.570 17:13:08 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.570 17:13:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.570 17:13:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.570 17:13:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.570 17:13:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.570 17:13:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.570 17:13:08 -- setup/common.sh@18 -- # local node=0 00:03:48.570 17:13:08 -- setup/common.sh@19 -- # local var val 00:03:48.570 17:13:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.570 17:13:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.570 17:13:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.570 17:13:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.571 17:13:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.571 17:13:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 20702008 kB' 'MemUsed: 11883360 kB' 'SwapCached: 0 kB' 'Active: 5028636 kB' 'Inactive: 3610408 kB' 'Active(anon): 4882976 kB' 'Inactive(anon): 0 kB' 'Active(file): 145660 kB' 'Inactive(file): 3610408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8477904 kB' 'Mapped: 53612 kB' 'AnonPages: 164348 kB' 'Shmem: 4721836 kB' 'KernelStack: 9992 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 133284 kB' 'Slab: 644480 kB' 'SReclaimable: 133284 kB' 'SUnreclaim: 511196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 
00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.571 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.571 17:13:08 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@33 -- # echo 0 00:03:48.572 17:13:08 -- setup/common.sh@33 -- # return 0 
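For readers following the trace: the long runs of `[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue` entries above come from setup/common.sh's get_meminfo helper walking /sys/devices/system/node/node0/meminfo one field at a time until the requested key matches (bash xtrace escapes every character of a quoted right-hand side, which is why the key prints as `\H\u\g\e...`). Below is a minimal sketch reconstructed from the traced line numbers; it is an approximation for illustration, not the verbatim SPDK source.

```bash
#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

# Approximate reconstruction of get_meminfo() as it appears in the xtrace
# above (setup/common.sh); details may differ from the real helper.
get_meminfo() {
	local get=$1 node=$2
	local var val _
	local mem_f mem

	mem_f=/proc/meminfo
	# Prefer the per-node meminfo file when a node was requested and exists.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <n> "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")

	while IFS=': ' read -r var val _; do
		# Quoted RHS => literal comparison; xtrace renders it as \H\u\g\e...
		if [[ $var == "$get" ]]; then
			echo "$val"   # e.g. "HugePages_Surp" -> "0"
			return 0
		fi
		continue
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

# Usage as in the log: surplus hugepages on NUMA node 0
get_meminfo HugePages_Surp 0
```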
00:03:48.572 17:13:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.572 17:13:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.572 17:13:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.572 17:13:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.572 17:13:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.572 17:13:08 -- setup/common.sh@18 -- # local node=1 00:03:48.572 17:13:08 -- setup/common.sh@19 -- # local var val 00:03:48.572 17:13:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.572 17:13:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.572 17:13:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.572 17:13:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.572 17:13:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.572 17:13:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698436 kB' 'MemFree: 22531124 kB' 'MemUsed: 5167312 kB' 'SwapCached: 0 kB' 'Active: 2391436 kB' 'Inactive: 93960 kB' 'Active(anon): 2138816 kB' 'Inactive(anon): 0 kB' 'Active(file): 252620 kB' 'Inactive(file): 93960 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2113520 kB' 'Mapped: 103464 kB' 'AnonPages: 372108 kB' 'Shmem: 1766940 kB' 'KernelStack: 12104 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130224 kB' 'Slab: 588724 kB' 'SReclaimable: 130224 kB' 'SUnreclaim: 458500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 
17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.572 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.572 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 
-- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # continue 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.573 17:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.573 17:13:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.573 17:13:08 -- setup/common.sh@33 -- # echo 0 00:03:48.573 17:13:08 -- setup/common.sh@33 -- # return 0 00:03:48.573 17:13:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.573 17:13:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.573 17:13:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.573 17:13:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.573 17:13:08 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:48.573 node0=512 expecting 513 00:03:48.573 17:13:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.573 17:13:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.573 17:13:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.573 17:13:08 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:48.573 node1=513 expecting 512 00:03:48.573 17:13:08 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:48.573 00:03:48.573 real 0m3.601s 00:03:48.573 user 0m1.403s 00:03:48.573 sys 0m2.267s 00:03:48.573 17:13:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:48.573 17:13:08 -- common/autotest_common.sh@10 -- # set +x 00:03:48.573 ************************************ 00:03:48.573 END TEST odd_alloc 00:03:48.573 ************************************ 00:03:48.573 17:13:08 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:48.573 17:13:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.573 17:13:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.573 17:13:08 -- common/autotest_common.sh@10 -- # set +x 00:03:48.573 ************************************ 00:03:48.573 START TEST custom_alloc 00:03:48.573 ************************************ 00:03:48.573 17:13:08 -- common/autotest_common.sh@1114 -- # custom_alloc 00:03:48.573 17:13:08 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:48.573 
17:13:08 -- setup/hugepages.sh@169 -- # local node 00:03:48.573 17:13:08 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:48.573 17:13:08 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:48.573 17:13:08 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:48.573 17:13:08 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:48.573 17:13:08 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:48.573 17:13:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.573 17:13:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.573 17:13:08 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:48.573 17:13:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.573 17:13:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.573 17:13:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.573 17:13:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:48.573 17:13:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.573 17:13:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.573 17:13:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.573 17:13:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.573 17:13:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.573 17:13:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.573 17:13:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:48.573 17:13:08 -- setup/hugepages.sh@83 -- # : 256 00:03:48.573 17:13:08 -- setup/hugepages.sh@84 -- # : 1 00:03:48.573 17:13:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.573 17:13:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:48.573 17:13:08 -- setup/hugepages.sh@83 -- # : 0 00:03:48.573 17:13:08 -- setup/hugepages.sh@84 -- # : 0 00:03:48.573 17:13:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.573 17:13:08 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:48.573 17:13:08 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:48.573 17:13:08 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:48.574 17:13:08 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.574 17:13:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.574 17:13:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.574 17:13:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:48.574 17:13:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.574 17:13:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.574 17:13:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.574 17:13:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.574 17:13:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.574 17:13:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.574 17:13:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.574 17:13:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.574 17:13:08 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:48.574 17:13:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.574 17:13:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:48.574 17:13:08 -- setup/hugepages.sh@78 -- # return 0 00:03:48.574 17:13:08 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:48.574 17:13:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:48.574 17:13:08 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:48.574 17:13:08 -- setup/hugepages.sh@183 -- # (( 
_nr_hugepages += nodes_hp[node] )) 00:03:48.574 17:13:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:48.574 17:13:08 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:48.574 17:13:08 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:48.574 17:13:08 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:48.574 17:13:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.574 17:13:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.574 17:13:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.574 17:13:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.574 17:13:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.574 17:13:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.574 17:13:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.574 17:13:08 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:48.574 17:13:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.574 17:13:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:48.574 17:13:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.574 17:13:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:48.574 17:13:08 -- setup/hugepages.sh@78 -- # return 0 00:03:48.574 17:13:08 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:48.574 17:13:08 -- setup/hugepages.sh@187 -- # setup output 00:03:48.574 17:13:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.574 17:13:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:51.877 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:51.877 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:51.877 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:51.877 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:51.877 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:51.877 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:51.877 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:51.877 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:51.877 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.138 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.138 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.138 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.138 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.138 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.138 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.138 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.138 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.138 17:13:11 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:52.138 17:13:11 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:52.138 17:13:11 -- setup/hugepages.sh@89 -- # local node 00:03:52.138 17:13:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.138 17:13:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.138 17:13:11 -- setup/hugepages.sh@92 -- # local surp 00:03:52.138 17:13:11 -- setup/hugepages.sh@93 -- # local resv 00:03:52.138 17:13:11 -- setup/hugepages.sh@94 -- # local anon 00:03:52.138 17:13:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 
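The custom_alloc test traced above splits hugepages unevenly across the two NUMA nodes (nodes_hp[0]=512, nodes_hp[1]=1024, 1536 total) and passes that layout to "setup output" through a single HUGENODE string. A rough sketch of how the traced hugepages.sh lines (@167, @181-@187) appear to build that string; the variable names come from the trace, the surrounding scaffolding is an assumption for illustration.

```bash
#!/usr/bin/env bash
# Sketch of the HUGENODE construction seen in the setup/hugepages.sh xtrace
# above; approximate, not the verbatim SPDK source.
custom_alloc_hugenode() {
	local IFS=,                         # join array elements with commas
	local node
	local -a nodes_hp=([0]=512 [1]=1024)
	local -a HUGENODE=()
	local _nr_hugepages=0

	for node in "${!nodes_hp[@]}"; do
		HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
		((_nr_hugepages += nodes_hp[node]))
	done

	# "${HUGENODE[*]}" with IFS=, yields: nodes_hp[0]=512,nodes_hp[1]=1024
	echo "HUGENODE=${HUGENODE[*]} (total ${_nr_hugepages} pages)"
}

custom_alloc_hugenode
# -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 (total 1536 pages)
```

This matches the values the verify step reads back later in the log (HugePages_Total: 1536 in the combined meminfo dump).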
00:03:52.138 17:13:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:52.138 17:13:11 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:52.138 17:13:11 -- setup/common.sh@18 -- # local node=
00:03:52.138 17:13:11 -- setup/common.sh@19 -- # local var val
00:03:52.138 17:13:11 -- setup/common.sh@20 -- # local mem_f mem
00:03:52.138 17:13:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.138 17:13:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.138 17:13:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.138 17:13:11 -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.138 17:13:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.138 17:13:11 -- setup/common.sh@31 -- # IFS=': '
00:03:52.138 17:13:11 -- setup/common.sh@31 -- # read -r var val _
00:03:52.138 17:13:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 42179912 kB' 'MemAvailable: 45917448 kB' 'Buffers: 4100 kB' 'Cached: 10587412 kB' 'SwapCached: 0 kB' 'Active: 7420844 kB' 'Inactive: 3704368 kB' 'Active(anon): 7022564 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536404 kB' 'Mapped: 157184 kB' 'Shmem: 6488864 kB' 'KReclaimable: 263508 kB' 'Slab: 1232952 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969444 kB' 'KernelStack: 22160 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957640 kB' 'Committed_AS: 8205972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218220 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB'
[xtrace continues: setup/common.sh@32 tests each meminfo key above against AnonHugePages and falls through to 'continue' on every non-matching field]
00:03:52.140 17:13:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:52.140 17:13:11 -- setup/common.sh@33 -- # echo 0
00:03:52.140 17:13:11 -- setup/common.sh@33 -- # return 0
00:03:52.140 17:13:11 -- setup/hugepages.sh@97 -- # anon=0
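(For readability: the xtrace above is stepping through a small get_meminfo helper. The sketch below is reconstructed purely from the trace output, not copied from SPDK's test/setup/common.sh, so names, line numbers and details may differ.)

    #!/usr/bin/env bash
    # Reconstructed sketch of the helper being traced: look up one key in
    # /proc/meminfo, or in a NUMA node's meminfo when a node id is given.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem

        mem_f=/proc/meminfo
        # With a node argument, read the per-node counters instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node <N> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "Key: value [kB]" pairs until the requested key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages     # prints 0 on the machine traced above
    get_meminfo HugePages_Surp 0  # same lookup against node0's meminfo

The anon=0 assignment above is simply hugepages.sh capturing that lookup's output; the same pattern repeats for the HugePages_Surp and HugePages_Rsvd lookups that follow.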
00:03:52.140 17:13:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:52.140 17:13:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.140 17:13:11 -- setup/common.sh@18 -- # local node=
00:03:52.140 17:13:11 -- setup/common.sh@19 -- # local var val
00:03:52.140 17:13:11 -- setup/common.sh@20 -- # local mem_f mem
00:03:52.140 17:13:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.140 17:13:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.140 17:13:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.140 17:13:11 -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.140 17:13:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.140 17:13:11 -- setup/common.sh@31 -- # IFS=': '
00:03:52.140 17:13:11 -- setup/common.sh@31 -- # read -r var val _
00:03:52.140 17:13:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 42183108 kB' 'MemAvailable: 45920644 kB' 'Buffers: 4100 kB' 'Cached: 10587416 kB' 'SwapCached: 0 kB' 'Active: 7420388 kB' 'Inactive: 3704368 kB' 'Active(anon): 7022108 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535980 kB' 'Mapped: 157136 kB' 'Shmem: 6488868 kB' 'KReclaimable: 263508 kB' 'Slab: 1232932 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969424 kB' 'KernelStack: 22128 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957640 kB' 'Committed_AS: 8201428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218060 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB'
[xtrace continues: setup/common.sh@32 tests each meminfo key above against HugePages_Surp and falls through to 'continue' on every non-matching field]
00:03:52.142 17:13:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.142 17:13:11 -- setup/common.sh@33 -- # echo 0
00:03:52.142 17:13:11 -- setup/common.sh@33 -- # return 0
00:03:52.142 17:13:11 -- setup/hugepages.sh@99 -- # surp=0
00:03:52.142 17:13:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:52.142 17:13:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:52.142 17:13:11 -- setup/common.sh@18 -- # local node=
00:03:52.142 17:13:11 -- setup/common.sh@19 -- # local var val
00:03:52.142 17:13:11 -- setup/common.sh@20 -- # local mem_f mem
00:03:52.142 17:13:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.142 17:13:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.142 17:13:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.142 17:13:11 -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.142 17:13:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.142 17:13:11 -- setup/common.sh@31 -- # IFS=': '
00:03:52.142 17:13:11 -- setup/common.sh@31 -- # read -r var val _
00:03:52.142 17:13:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 42183288 kB' 'MemAvailable: 45920824 kB' 'Buffers: 4100 kB' 'Cached: 10587432 kB' 'SwapCached: 0 kB' 'Active: 7418840 kB' 'Inactive: 3704368 kB' 'Active(anon): 7020560 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534944 kB' 'Mapped: 157076 kB' 'Shmem: 6488884 kB' 'KReclaimable: 263508 kB' 'Slab: 1232924 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969416 kB' 'KernelStack: 21968 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957640 kB' 'Committed_AS: 8201448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217964 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB'
[xtrace continues: setup/common.sh@32 tests each meminfo key above against HugePages_Rsvd and falls through to 'continue' on every non-matching field]
00:03:52.143 17:13:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.143 17:13:11 -- setup/common.sh@33 -- # echo 0
00:03:52.143 17:13:11 -- setup/common.sh@33 -- # return 0
00:03:52.143 17:13:11 -- setup/hugepages.sh@100 -- # resv=0
00:03:52.143 17:13:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:52.143 nr_hugepages=1536
17:13:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:52.143 resv_hugepages=0
17:13:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:52.143 surplus_hugepages=0
17:13:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:52.143 anon_hugepages=0
17:13:11 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:52.144 17:13:11 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
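(The two (( ... )) guards above are plain consistency checks on the values just read back. A minimal illustration with the numbers from this run, reusing the trace's variable names; it is not the actual hugepages.sh code:)

    # Values pulled out of /proc/meminfo by the lookups above.
    nr_hugepages=1536   # HugePages_Total requested by the test
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd

    # hugepages.sh@107-style check: every allocated page is accounted for.
    (( 1536 == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"

    # 1536 pages x 2048 kB per page matches the 'Hugetlb: 3145728 kB' line.
    echo "$((1536 * 2048)) kB"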
00:03:52.144 17:13:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:52.144 17:13:11 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:52.144 17:13:11 -- setup/common.sh@18 -- # local node=
00:03:52.144 17:13:11 -- setup/common.sh@19 -- # local var val
00:03:52.144 17:13:11 -- setup/common.sh@20 -- # local mem_f mem
00:03:52.144 17:13:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.144 17:13:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.144 17:13:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.144 17:13:11 -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.144 17:13:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.144 17:13:11 -- setup/common.sh@31 -- # IFS=': '
00:03:52.144 17:13:11 -- setup/common.sh@31 -- # read -r var val _
00:03:52.144 17:13:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 42183508 kB' 'MemAvailable: 45921044 kB' 'Buffers: 4100 kB' 'Cached: 10587448 kB' 'SwapCached: 0 kB' 'Active: 7418812 kB' 'Inactive: 3704368 kB' 'Active(anon): 7020532 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534944 kB' 'Mapped: 157076 kB' 'Shmem: 6488900 kB' 'KReclaimable: 263508 kB' 'Slab: 1232924 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969416 kB' 'KernelStack: 21968 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957640 kB' 'Committed_AS: 8201596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217980 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB'
[xtrace continues: setup/common.sh@32 tests each meminfo key above against HugePages_Total and falls through to 'continue' on every non-matching field]
00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:52.407 17:13:11 -- setup/common.sh@33 -- # echo 1536
00:03:52.407 17:13:11 -- setup/common.sh@33 -- # return 0
00:03:52.407 17:13:11 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:52.407 17:13:11 -- setup/hugepages.sh@112 -- # get_nodes
00:03:52.407 17:13:11 -- setup/hugepages.sh@27 -- # local node
00:03:52.407 17:13:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.407 17:13:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:52.407 17:13:11 -- setup/hugepages.sh@29 -- # for node in
/sys/devices/system/node/node+([0-9]) 00:03:52.407 17:13:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.407 17:13:11 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.407 17:13:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.407 17:13:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.407 17:13:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.407 17:13:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.407 17:13:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.407 17:13:11 -- setup/common.sh@18 -- # local node=0 00:03:52.407 17:13:11 -- setup/common.sh@19 -- # local var val 00:03:52.407 17:13:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.407 17:13:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.407 17:13:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.407 17:13:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.407 17:13:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.407 17:13:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 20704724 kB' 'MemUsed: 11880644 kB' 'SwapCached: 0 kB' 'Active: 5028196 kB' 'Inactive: 3610408 kB' 'Active(anon): 4882536 kB' 'Inactive(anon): 0 kB' 'Active(file): 145660 kB' 'Inactive(file): 3610408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8478016 kB' 'Mapped: 53612 kB' 'AnonPages: 163912 kB' 'Shmem: 4721948 kB' 'KernelStack: 9976 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133284 kB' 'Slab: 644444 kB' 'SReclaimable: 133284 kB' 'SUnreclaim: 511160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.407 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.407 17:13:11 
-- setup/common.sh@31 -- # IFS=': ' 00:03:52.407 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@33 -- # echo 0 00:03:52.408 17:13:11 -- setup/common.sh@33 -- # return 0 00:03:52.408 17:13:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.408 17:13:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.408 17:13:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.408 17:13:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.408 17:13:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.408 17:13:11 -- setup/common.sh@18 -- # local node=1 00:03:52.408 17:13:11 -- setup/common.sh@19 -- # local var val 00:03:52.408 17:13:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.408 17:13:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.408 17:13:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.408 17:13:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.408 17:13:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.408 17:13:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.408 17:13:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698436 kB' 'MemFree: 21479168 kB' 'MemUsed: 6219268 kB' 'SwapCached: 0 kB' 'Active: 2391112 kB' 'Inactive: 93960 kB' 'Active(anon): 2138492 kB' 'Inactive(anon): 0 kB' 'Active(file): 252620 kB' 'Inactive(file): 93960 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2113552 kB' 'Mapped: 103464 kB' 'AnonPages: 371572 kB' 'Shmem: 1766972 kB' 'KernelStack: 12040 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130224 kB' 'Slab: 588480 kB' 'SReclaimable: 130224 kB' 'SUnreclaim: 458256 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.408 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.408 17:13:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 
00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # continue 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.409 17:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.409 17:13:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.409 17:13:11 -- setup/common.sh@33 -- # echo 0 00:03:52.409 17:13:11 -- setup/common.sh@33 -- # return 0 
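The get_meminfo calls traced above scan a meminfo file field by field under IFS=': ' and echo the value of the requested field, reading /sys/devices/system/node/node<N>/meminfo when a node is given and /proc/meminfo otherwise. Below is a minimal stand-alone sketch of that lookup, assuming the same "Field:   value" layout; the function name get_meminfo_sketch and its defaults are illustrative only and are not the actual setup/common.sh implementation.
#!/usr/bin/env bash
# Sketch of the per-node meminfo lookup traced above (illustrative only).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    # Per-node statistics live under sysfs; fall back to the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node lines carry a "Node <N> " prefix; strip it, then scan
    # "Field:   value [kB]" pairs until the requested field is found.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# Example: in the run above, "get_meminfo_sketch HugePages_Surp 1" would print 0,
# and the per-node HugePages_Total values work out to node0=512 and node1=1024,
# matching the "512,1024" expectation checked at the end of the custom_alloc test.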
00:03:52.409 17:13:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.409 17:13:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.409 17:13:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.409 17:13:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.409 17:13:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.409 node0=512 expecting 512 00:03:52.409 17:13:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.409 17:13:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.409 17:13:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.409 17:13:11 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:52.409 node1=1024 expecting 1024 00:03:52.409 17:13:11 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:52.409 00:03:52.409 real 0m3.693s 00:03:52.409 user 0m1.393s 00:03:52.409 sys 0m2.365s 00:03:52.409 17:13:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:52.409 17:13:11 -- common/autotest_common.sh@10 -- # set +x 00:03:52.409 ************************************ 00:03:52.409 END TEST custom_alloc 00:03:52.409 ************************************ 00:03:52.409 17:13:12 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:52.409 17:13:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:52.409 17:13:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.409 17:13:12 -- common/autotest_common.sh@10 -- # set +x 00:03:52.409 ************************************ 00:03:52.409 START TEST no_shrink_alloc 00:03:52.409 ************************************ 00:03:52.409 17:13:12 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:03:52.409 17:13:12 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:52.410 17:13:12 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.410 17:13:12 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:52.410 17:13:12 -- setup/hugepages.sh@51 -- # shift 00:03:52.410 17:13:12 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:52.410 17:13:12 -- setup/hugepages.sh@52 -- # local node_ids 00:03:52.410 17:13:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.410 17:13:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.410 17:13:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:52.410 17:13:12 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:52.410 17:13:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.410 17:13:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.410 17:13:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.410 17:13:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.410 17:13:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.410 17:13:12 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:52.410 17:13:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:52.410 17:13:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:52.410 17:13:12 -- setup/hugepages.sh@73 -- # return 0 00:03:52.410 17:13:12 -- setup/hugepages.sh@198 -- # setup output 00:03:52.410 17:13:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.410 17:13:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:55.848 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:55.848 0000:00:04.6 (8086 2021): Already using 
the vfio-pci driver 00:03:55.848 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:55.848 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:55.848 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:55.848 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:55.848 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:55.848 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:55.848 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:55.848 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:55.849 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:55.849 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:55.849 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:55.849 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:55.849 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:55.849 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:55.849 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:55.849 17:13:15 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:55.849 17:13:15 -- setup/hugepages.sh@89 -- # local node 00:03:55.849 17:13:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.849 17:13:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.849 17:13:15 -- setup/hugepages.sh@92 -- # local surp 00:03:55.849 17:13:15 -- setup/hugepages.sh@93 -- # local resv 00:03:55.849 17:13:15 -- setup/hugepages.sh@94 -- # local anon 00:03:55.849 17:13:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.849 17:13:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.849 17:13:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.849 17:13:15 -- setup/common.sh@18 -- # local node= 00:03:55.849 17:13:15 -- setup/common.sh@19 -- # local var val 00:03:55.849 17:13:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.849 17:13:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.849 17:13:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.849 17:13:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.849 17:13:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.849 17:13:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43259496 kB' 'MemAvailable: 46997032 kB' 'Buffers: 4100 kB' 'Cached: 10587544 kB' 'SwapCached: 0 kB' 'Active: 7419560 kB' 'Inactive: 3704368 kB' 'Active(anon): 7021280 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535528 kB' 'Mapped: 157144 kB' 'Shmem: 6488996 kB' 'KReclaimable: 263508 kB' 'Slab: 1232880 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969372 kB' 'KernelStack: 21920 kB' 'PageTables: 7652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8202072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218012 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 
kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 
-- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 
00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 17:13:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.850 17:13:15 -- setup/common.sh@33 -- # echo 0 00:03:55.850 17:13:15 -- setup/common.sh@33 -- # return 0 00:03:55.850 17:13:15 -- setup/hugepages.sh@97 -- # anon=0 00:03:55.850 17:13:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.850 17:13:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.850 17:13:15 -- setup/common.sh@18 -- # local node= 00:03:55.850 17:13:15 -- setup/common.sh@19 -- # local var val 00:03:55.850 17:13:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.850 17:13:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.850 17:13:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.850 17:13:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.850 17:13:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.850 17:13:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.850 17:13:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43259972 kB' 'MemAvailable: 46997508 kB' 'Buffers: 4100 kB' 'Cached: 10587552 kB' 'SwapCached: 0 kB' 'Active: 7419988 kB' 'Inactive: 3704368 kB' 'Active(anon): 7021708 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536020 kB' 'Mapped: 157116 kB' 'Shmem: 6489004 kB' 'KReclaimable: 263508 kB' 'Slab: 1232872 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969364 kB' 'KernelStack: 22000 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8204384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217964 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.850 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 
17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 17:13:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.851 17:13:15 -- setup/common.sh@33 -- # echo 0 00:03:55.851 17:13:15 -- setup/common.sh@33 
-- # return 0 00:03:55.851 17:13:15 -- setup/hugepages.sh@99 -- # surp=0 00:03:55.851 17:13:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.851 17:13:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.851 17:13:15 -- setup/common.sh@18 -- # local node= 00:03:55.851 17:13:15 -- setup/common.sh@19 -- # local var val 00:03:55.851 17:13:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.851 17:13:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.851 17:13:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.851 17:13:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.851 17:13:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.851 17:13:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43260568 kB' 'MemAvailable: 46998104 kB' 'Buffers: 4100 kB' 'Cached: 10587568 kB' 'SwapCached: 0 kB' 'Active: 7419140 kB' 'Inactive: 3704368 kB' 'Active(anon): 7020860 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535164 kB' 'Mapped: 157116 kB' 'Shmem: 6489020 kB' 'KReclaimable: 263508 kB' 'Slab: 1232960 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969452 kB' 'KernelStack: 21904 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8202240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217932 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.852 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.852 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 
-- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.853 17:13:15 -- setup/common.sh@33 -- # echo 0 00:03:55.853 17:13:15 -- setup/common.sh@33 -- # return 0 00:03:55.853 17:13:15 -- setup/hugepages.sh@100 -- # resv=0 00:03:55.853 17:13:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.853 nr_hugepages=1024 00:03:55.853 17:13:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.853 resv_hugepages=0 00:03:55.853 17:13:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.853 surplus_hugepages=0 00:03:55.853 17:13:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.853 anon_hugepages=0 00:03:55.853 17:13:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.853 17:13:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:55.853 17:13:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.853 17:13:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.853 17:13:15 -- setup/common.sh@18 -- # local node= 00:03:55.853 17:13:15 -- setup/common.sh@19 -- # local var val 00:03:55.853 17:13:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.853 17:13:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.853 17:13:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.853 17:13:15 -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:03:55.853 17:13:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.853 17:13:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43259812 kB' 'MemAvailable: 46997348 kB' 'Buffers: 4100 kB' 'Cached: 10587572 kB' 'SwapCached: 0 kB' 'Active: 7420028 kB' 'Inactive: 3704368 kB' 'Active(anon): 7021748 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536084 kB' 'Mapped: 157116 kB' 'Shmem: 6489024 kB' 'KReclaimable: 263508 kB' 'Slab: 1232896 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 969388 kB' 'KernelStack: 21984 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8202624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.853 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.853 17:13:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- 
setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- 
setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.854 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.854 17:13:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.854 17:13:15 -- setup/common.sh@33 -- # echo 1024 00:03:55.854 17:13:15 -- setup/common.sh@33 -- # return 0 00:03:55.854 17:13:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.854 17:13:15 -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.854 17:13:15 -- setup/hugepages.sh@27 -- # local node 00:03:55.854 17:13:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.854 17:13:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:55.854 17:13:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.854 17:13:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:55.854 17:13:15 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.854 17:13:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.854 17:13:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.854 17:13:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.854 17:13:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.854 17:13:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.854 17:13:15 -- setup/common.sh@18 -- # local node=0 00:03:55.854 17:13:15 -- setup/common.sh@19 -- # local var val 00:03:55.854 17:13:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.854 17:13:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.854 17:13:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.854 17:13:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.854 17:13:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.854 17:13:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.855 17:13:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 19671844 kB' 'MemUsed: 12913524 kB' 'SwapCached: 0 kB' 'Active: 5028972 kB' 'Inactive: 3610408 kB' 'Active(anon): 4883312 kB' 'Inactive(anon): 0 kB' 'Active(file): 145660 kB' 'Inactive(file): 3610408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8478152 kB' 'Mapped: 53612 kB' 'AnonPages: 164496 kB' 'Shmem: 4722084 kB' 'KernelStack: 9976 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133284 kB' 'Slab: 644320 kB' 'SReclaimable: 133284 kB' 'SUnreclaim: 511036 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- 
# continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # continue 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 17:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 17:13:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.855 17:13:15 -- setup/common.sh@33 -- # echo 0 00:03:55.855 17:13:15 -- setup/common.sh@33 -- # 
return 0 00:03:55.855 17:13:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.855 17:13:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.855 17:13:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.855 17:13:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.855 17:13:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:55.855 node0=1024 expecting 1024 00:03:55.855 17:13:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:55.855 17:13:15 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:55.855 17:13:15 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:55.856 17:13:15 -- setup/hugepages.sh@202 -- # setup output 00:03:55.856 17:13:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.856 17:13:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:59.145 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:59.145 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:59.145 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:59.145 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:59.145 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:59.145 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:59.145 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:59.145 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:59.146 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:59.146 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:59.146 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:59.146 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:59.146 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:59.146 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:59.146 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:59.146 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:59.146 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:59.146 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:59.146 17:13:18 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:59.146 17:13:18 -- setup/hugepages.sh@89 -- # local node 00:03:59.146 17:13:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.146 17:13:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.146 17:13:18 -- setup/hugepages.sh@92 -- # local surp 00:03:59.146 17:13:18 -- setup/hugepages.sh@93 -- # local resv 00:03:59.146 17:13:18 -- setup/hugepages.sh@94 -- # local anon 00:03:59.146 17:13:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.146 17:13:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.146 17:13:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.146 17:13:18 -- setup/common.sh@18 -- # local node= 00:03:59.146 17:13:18 -- setup/common.sh@19 -- # local var val 00:03:59.146 17:13:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.146 17:13:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.146 17:13:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.146 17:13:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.146 17:13:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.146 17:13:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.146 17:13:18 -- setup/common.sh@31 
-- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43267612 kB' 'MemAvailable: 47005148 kB' 'Buffers: 4100 kB' 'Cached: 10587660 kB' 'SwapCached: 0 kB' 'Active: 7420200 kB' 'Inactive: 3704368 kB' 'Active(anon): 7021920 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536224 kB' 'Mapped: 157144 kB' 'Shmem: 6489112 kB' 'KReclaimable: 263508 kB' 'Slab: 1231980 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 968472 kB' 'KernelStack: 21920 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8202712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218012 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- 
setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.146 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.146 17:13:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.147 17:13:18 -- setup/common.sh@33 -- # echo 0 00:03:59.147 17:13:18 -- setup/common.sh@33 -- # return 0 00:03:59.147 17:13:18 -- setup/hugepages.sh@97 -- # anon=0 00:03:59.147 17:13:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.147 17:13:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.147 17:13:18 -- setup/common.sh@18 -- # local node= 00:03:59.147 17:13:18 -- setup/common.sh@19 -- # local var val 00:03:59.147 17:13:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.147 17:13:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.147 17:13:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.147 17:13:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.147 17:13:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.147 17:13:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43272824 kB' 'MemAvailable: 47010360 kB' 'Buffers: 4100 kB' 'Cached: 10587668 kB' 'SwapCached: 0 kB' 'Active: 7420296 kB' 'Inactive: 3704368 kB' 'Active(anon): 7022016 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536344 kB' 'Mapped: 
157136 kB' 'Shmem: 6489120 kB' 'KReclaimable: 263508 kB' 'Slab: 1231944 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 968436 kB' 'KernelStack: 21952 kB' 'PageTables: 7764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8202728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217964 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 
17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.147 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # 
continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 
-- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 17:13:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.148 17:13:18 -- setup/common.sh@33 -- # echo 0 00:03:59.148 17:13:18 -- setup/common.sh@33 -- # return 0 00:03:59.148 17:13:18 -- setup/hugepages.sh@99 -- # surp=0 00:03:59.148 17:13:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.148 17:13:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.148 17:13:18 -- setup/common.sh@18 -- # local node= 00:03:59.148 17:13:18 -- setup/common.sh@19 -- # local var val 00:03:59.148 17:13:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.148 17:13:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.148 17:13:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.148 17:13:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.148 17:13:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.148 17:13:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43274512 kB' 'MemAvailable: 47012048 kB' 'Buffers: 4100 kB' 'Cached: 10587684 kB' 'SwapCached: 0 kB' 'Active: 7420244 kB' 'Inactive: 3704368 kB' 'Active(anon): 7021964 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536196 kB' 'Mapped: 157088 kB' 'Shmem: 6489136 kB' 'KReclaimable: 263508 kB' 'Slab: 1231972 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 968464 kB' 'KernelStack: 21952 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8202880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 
-- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.149 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.149 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.410 
17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.410 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.410 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.411 17:13:18 -- setup/common.sh@33 -- # echo 0 00:03:59.411 17:13:18 -- setup/common.sh@33 -- # return 0 00:03:59.411 17:13:18 -- setup/hugepages.sh@100 -- # resv=0 00:03:59.411 17:13:18 -- setup/hugepages.sh@102 
-- # echo nr_hugepages=1024 00:03:59.411 nr_hugepages=1024 00:03:59.411 17:13:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.411 resv_hugepages=0 00:03:59.411 17:13:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.411 surplus_hugepages=0 00:03:59.411 17:13:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.411 anon_hugepages=0 00:03:59.411 17:13:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.411 17:13:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:59.411 17:13:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.411 17:13:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.411 17:13:18 -- setup/common.sh@18 -- # local node= 00:03:59.411 17:13:18 -- setup/common.sh@19 -- # local var val 00:03:59.411 17:13:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.411 17:13:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.411 17:13:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.411 17:13:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.411 17:13:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.411 17:13:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43275288 kB' 'MemAvailable: 47012824 kB' 'Buffers: 4100 kB' 'Cached: 10587696 kB' 'SwapCached: 0 kB' 'Active: 7420568 kB' 'Inactive: 3704368 kB' 'Active(anon): 7022288 kB' 'Inactive(anon): 0 kB' 'Active(file): 398280 kB' 'Inactive(file): 3704368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536528 kB' 'Mapped: 157088 kB' 'Shmem: 6489148 kB' 'KReclaimable: 263508 kB' 'Slab: 1231972 kB' 'SReclaimable: 263508 kB' 'SUnreclaim: 968464 kB' 'KernelStack: 21984 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8203260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2702708 kB' 'DirectMap2M: 26343424 kB' 'DirectMap1G: 40894464 kB' 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 
-- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.411 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.411 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.412 17:13:18 -- setup/common.sh@33 -- # echo 1024 00:03:59.412 17:13:18 -- setup/common.sh@33 -- # return 0 00:03:59.412 17:13:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.412 17:13:18 -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.412 17:13:18 -- setup/hugepages.sh@27 -- # local node 00:03:59.412 17:13:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.412 17:13:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.412 17:13:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.412 17:13:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:59.412 17:13:18 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.412 17:13:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.412 17:13:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.412 17:13:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.412 17:13:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.412 17:13:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.412 17:13:18 -- setup/common.sh@18 -- # local node=0 00:03:59.412 17:13:18 -- setup/common.sh@19 -- # local var val 00:03:59.412 17:13:18 -- setup/common.sh@20 -- # local mem_f mem 
00:03:59.412 17:13:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.412 17:13:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.412 17:13:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.412 17:13:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.412 17:13:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.412 17:13:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 19685148 kB' 'MemUsed: 12900220 kB' 'SwapCached: 0 kB' 'Active: 5029884 kB' 'Inactive: 3610408 kB' 'Active(anon): 4884224 kB' 'Inactive(anon): 0 kB' 'Active(file): 145660 kB' 'Inactive(file): 3610408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8478216 kB' 'Mapped: 53612 kB' 'AnonPages: 165404 kB' 'Shmem: 4722148 kB' 'KernelStack: 9992 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133284 kB' 'Slab: 643572 kB' 'SReclaimable: 133284 kB' 'SUnreclaim: 510288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.412 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.412 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 
17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # continue 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.413 17:13:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.413 17:13:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.413 17:13:18 -- setup/common.sh@33 -- # echo 0 00:03:59.413 17:13:18 -- setup/common.sh@33 -- # return 0 00:03:59.413 17:13:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.413 17:13:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.413 17:13:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.413 17:13:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.413 17:13:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:59.413 node0=1024 expecting 1024 00:03:59.413 17:13:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:59.413 00:03:59.413 real 0m6.952s 00:03:59.413 user 0m2.563s 00:03:59.413 sys 0m4.463s 00:03:59.413 17:13:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:59.413 17:13:18 -- common/autotest_common.sh@10 -- # set +x 00:03:59.413 ************************************ 00:03:59.413 END TEST no_shrink_alloc 00:03:59.413 ************************************ 00:03:59.413 17:13:19 -- setup/hugepages.sh@217 -- # clear_hp 00:03:59.413 17:13:19 -- setup/hugepages.sh@37 -- # local node hp 00:03:59.413 17:13:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.413 17:13:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.413 17:13:19 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.413 17:13:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.413 17:13:19 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.413 17:13:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.413 17:13:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.413 17:13:19 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.413 17:13:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.414 17:13:19 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.414 17:13:19 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:59.414 17:13:19 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:59.414 00:03:59.414 real 0m27.631s 00:03:59.414 user 0m9.664s 00:03:59.414 sys 0m16.458s 00:03:59.414 17:13:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:59.414 17:13:19 -- common/autotest_common.sh@10 -- # set +x 00:03:59.414 ************************************ 00:03:59.414 END TEST hugepages 00:03:59.414 ************************************ 
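Note on the hugepages trace above: setup/common.sh walks /sys/devices/system/node/node0/meminfo, strips the "Node 0 " prefix from every line, splits each entry on ": " into a key and a value, and keeps reading until the HugePages_Surp line, after which the per-node HugePages_Total (1024 here) is compared against the expected count ("node0=1024 expecting 1024"). The sketch below illustrates the same idea in a standalone form; it is not the SPDK helper itself, and the get_node_hugepages name plus the two throwaway reads that replace the prefix-stripping are assumptions made for brevity.

#!/usr/bin/env bash
# Read per-node hugepage counters from the node meminfo file. Each line looks
# like "Node 0 HugePages_Total:    1024", so the first two fields are skipped
# and the rest is split on ": " into a key and a value.
get_node_hugepages() {
    local node=$1 mem_f=/sys/devices/system/node/node$node/meminfo
    local -A mem=()
    local key val _
    while IFS=': ' read -r _ _ key val _; do
        mem[$key]=$val
    done < "$mem_f"
    echo "${mem[HugePages_Total]:-0}"
}

expected=1024
actual=$(get_node_hugepages 0)
echo "node0=$actual expecting $expected"
[[ $actual == "$expected" ]]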
00:03:59.414 17:13:19 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:59.414 17:13:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.414 17:13:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.414 17:13:19 -- common/autotest_common.sh@10 -- # set +x 00:03:59.414 ************************************ 00:03:59.414 START TEST driver 00:03:59.414 ************************************ 00:03:59.414 17:13:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:59.673 * Looking for test storage... 00:03:59.673 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:59.673 17:13:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:59.673 17:13:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:59.673 17:13:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:59.673 17:13:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:59.673 17:13:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:59.673 17:13:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:59.673 17:13:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:59.673 17:13:19 -- scripts/common.sh@335 -- # IFS=.-: 00:03:59.673 17:13:19 -- scripts/common.sh@335 -- # read -ra ver1 00:03:59.673 17:13:19 -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.673 17:13:19 -- scripts/common.sh@336 -- # read -ra ver2 00:03:59.673 17:13:19 -- scripts/common.sh@337 -- # local 'op=<' 00:03:59.673 17:13:19 -- scripts/common.sh@339 -- # ver1_l=2 00:03:59.673 17:13:19 -- scripts/common.sh@340 -- # ver2_l=1 00:03:59.673 17:13:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:59.673 17:13:19 -- scripts/common.sh@343 -- # case "$op" in 00:03:59.673 17:13:19 -- scripts/common.sh@344 -- # : 1 00:03:59.673 17:13:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:59.673 17:13:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.673 17:13:19 -- scripts/common.sh@364 -- # decimal 1 00:03:59.673 17:13:19 -- scripts/common.sh@352 -- # local d=1 00:03:59.673 17:13:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.673 17:13:19 -- scripts/common.sh@354 -- # echo 1 00:03:59.673 17:13:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:59.673 17:13:19 -- scripts/common.sh@365 -- # decimal 2 00:03:59.673 17:13:19 -- scripts/common.sh@352 -- # local d=2 00:03:59.673 17:13:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.673 17:13:19 -- scripts/common.sh@354 -- # echo 2 00:03:59.673 17:13:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:59.673 17:13:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:59.673 17:13:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:59.673 17:13:19 -- scripts/common.sh@367 -- # return 0 00:03:59.673 17:13:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.673 17:13:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:59.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.673 --rc genhtml_branch_coverage=1 00:03:59.673 --rc genhtml_function_coverage=1 00:03:59.673 --rc genhtml_legend=1 00:03:59.673 --rc geninfo_all_blocks=1 00:03:59.673 --rc geninfo_unexecuted_blocks=1 00:03:59.673 00:03:59.673 ' 00:03:59.673 17:13:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:59.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.673 --rc genhtml_branch_coverage=1 00:03:59.673 --rc genhtml_function_coverage=1 00:03:59.673 --rc genhtml_legend=1 00:03:59.673 --rc geninfo_all_blocks=1 00:03:59.673 --rc geninfo_unexecuted_blocks=1 00:03:59.673 00:03:59.673 ' 00:03:59.673 17:13:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:59.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.673 --rc genhtml_branch_coverage=1 00:03:59.673 --rc genhtml_function_coverage=1 00:03:59.673 --rc genhtml_legend=1 00:03:59.673 --rc geninfo_all_blocks=1 00:03:59.673 --rc geninfo_unexecuted_blocks=1 00:03:59.673 00:03:59.673 ' 00:03:59.673 17:13:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:59.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.673 --rc genhtml_branch_coverage=1 00:03:59.673 --rc genhtml_function_coverage=1 00:03:59.673 --rc genhtml_legend=1 00:03:59.673 --rc geninfo_all_blocks=1 00:03:59.673 --rc geninfo_unexecuted_blocks=1 00:03:59.673 00:03:59.673 ' 00:03:59.673 17:13:19 -- setup/driver.sh@68 -- # setup reset 00:03:59.673 17:13:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.673 17:13:19 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:04.950 17:13:24 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:04.950 17:13:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.950 17:13:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.950 17:13:24 -- common/autotest_common.sh@10 -- # set +x 00:04:04.950 ************************************ 00:04:04.950 START TEST guess_driver 00:04:04.950 ************************************ 00:04:04.950 17:13:24 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:04.950 17:13:24 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:04.950 17:13:24 -- setup/driver.sh@47 -- # local fail=0 00:04:04.950 17:13:24 -- setup/driver.sh@49 -- # pick_driver 00:04:04.950 17:13:24 -- setup/driver.sh@36 -- 
# vfio 00:04:04.950 17:13:24 -- setup/driver.sh@21 -- # local iommu_grups 00:04:04.950 17:13:24 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:04.950 17:13:24 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:04.951 17:13:24 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:04.951 17:13:24 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:04.951 17:13:24 -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:04.951 17:13:24 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:04.951 17:13:24 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:04.951 17:13:24 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:04.951 17:13:24 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:04.951 17:13:24 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:04.951 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:04.951 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:04.951 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:04.951 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:04.951 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:04.951 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:04.951 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:04.951 17:13:24 -- setup/driver.sh@30 -- # return 0 00:04:04.951 17:13:24 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:04.951 17:13:24 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:04.951 17:13:24 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:04.951 17:13:24 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:04.951 Looking for driver=vfio-pci 00:04:04.951 17:13:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.951 17:13:24 -- setup/driver.sh@45 -- # setup output config 00:04:04.951 17:13:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.951 17:13:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:08.238 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.238 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.238 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.238 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.238 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.238 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.238 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.238 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.238 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.238 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.238 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.238 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.238 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.238 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.238 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.238 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.238 17:13:27 -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:04:08.238 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.238 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.238 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.238 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.238 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.238 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.238 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.238 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.238 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.238 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.238 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.238 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.238 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.238 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.238 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.239 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.239 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.239 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.239 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.239 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.239 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.239 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.239 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.239 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.239 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.239 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.239 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.239 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.239 17:13:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.239 17:13:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.239 17:13:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.143 17:13:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.143 17:13:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.143 17:13:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.401 17:13:29 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:10.401 17:13:29 -- setup/driver.sh@65 -- # setup reset 00:04:10.401 17:13:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.401 17:13:29 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.675 00:04:15.675 real 0m10.597s 00:04:15.675 user 0m2.639s 00:04:15.675 sys 0m5.322s 00:04:15.675 17:13:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:15.675 17:13:34 -- common/autotest_common.sh@10 -- # set +x 00:04:15.675 ************************************ 00:04:15.675 END TEST guess_driver 00:04:15.675 ************************************ 00:04:15.675 00:04:15.675 real 0m15.808s 00:04:15.675 user 0m4.200s 00:04:15.675 sys 0m8.233s 00:04:15.675 17:13:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:15.675 17:13:34 -- common/autotest_common.sh@10 -- # set +x 00:04:15.675 
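The guess_driver run traced above settles on vfio-pci by checking three things: whether /sys/module/vfio/parameters/enable_unsafe_noiommu_mode exists (unsafe_vfio=N here), how many IOMMU groups are present under /sys/kernel/iommu_groups (176 on this node), and whether "modprobe --show-depends vfio_pci" resolves to an insmod chain of real .ko modules. A hedged sketch of that decision follows; it is not the SPDK script itself, and the uio_pci_generic fallback is an assumption, since the log only exercises the vfio-pci path.

#!/usr/bin/env bash
# Sketch of the driver-guess logic visible in the trace: prefer vfio-pci when
# IOMMU groups exist (or unsafe no-IOMMU mode is enabled) and the module resolves.
guess_driver() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    shopt -s nullglob
    local groups=(/sys/kernel/iommu_groups/*)
    if [[ $unsafe_vfio == [Yy] ]] || (( ${#groups[@]} > 0 )); then
        # --show-depends prints the insmod chain without actually loading anything
        if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    echo uio_pci_generic   # fallback shown as an assumption, not taken in this log
}

echo "Looking for driver=$(guess_driver)"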
************************************ 00:04:15.675 END TEST driver 00:04:15.675 ************************************ 00:04:15.675 17:13:34 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:15.675 17:13:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.675 17:13:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.675 17:13:34 -- common/autotest_common.sh@10 -- # set +x 00:04:15.675 ************************************ 00:04:15.675 START TEST devices 00:04:15.675 ************************************ 00:04:15.675 17:13:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:15.675 * Looking for test storage... 00:04:15.675 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:15.675 17:13:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:15.675 17:13:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:15.675 17:13:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:15.675 17:13:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:15.675 17:13:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:15.675 17:13:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:15.675 17:13:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:15.675 17:13:35 -- scripts/common.sh@335 -- # IFS=.-: 00:04:15.675 17:13:35 -- scripts/common.sh@335 -- # read -ra ver1 00:04:15.675 17:13:35 -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.675 17:13:35 -- scripts/common.sh@336 -- # read -ra ver2 00:04:15.675 17:13:35 -- scripts/common.sh@337 -- # local 'op=<' 00:04:15.675 17:13:35 -- scripts/common.sh@339 -- # ver1_l=2 00:04:15.675 17:13:35 -- scripts/common.sh@340 -- # ver2_l=1 00:04:15.676 17:13:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:15.676 17:13:35 -- scripts/common.sh@343 -- # case "$op" in 00:04:15.676 17:13:35 -- scripts/common.sh@344 -- # : 1 00:04:15.676 17:13:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:15.676 17:13:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.676 17:13:35 -- scripts/common.sh@364 -- # decimal 1 00:04:15.676 17:13:35 -- scripts/common.sh@352 -- # local d=1 00:04:15.676 17:13:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.676 17:13:35 -- scripts/common.sh@354 -- # echo 1 00:04:15.676 17:13:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:15.676 17:13:35 -- scripts/common.sh@365 -- # decimal 2 00:04:15.676 17:13:35 -- scripts/common.sh@352 -- # local d=2 00:04:15.676 17:13:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.676 17:13:35 -- scripts/common.sh@354 -- # echo 2 00:04:15.676 17:13:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:15.676 17:13:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:15.676 17:13:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:15.676 17:13:35 -- scripts/common.sh@367 -- # return 0 00:04:15.676 17:13:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.676 17:13:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:15.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.676 --rc genhtml_branch_coverage=1 00:04:15.676 --rc genhtml_function_coverage=1 00:04:15.676 --rc genhtml_legend=1 00:04:15.676 --rc geninfo_all_blocks=1 00:04:15.676 --rc geninfo_unexecuted_blocks=1 00:04:15.676 00:04:15.676 ' 00:04:15.676 17:13:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:15.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.676 --rc genhtml_branch_coverage=1 00:04:15.676 --rc genhtml_function_coverage=1 00:04:15.676 --rc genhtml_legend=1 00:04:15.676 --rc geninfo_all_blocks=1 00:04:15.676 --rc geninfo_unexecuted_blocks=1 00:04:15.676 00:04:15.676 ' 00:04:15.676 17:13:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:15.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.676 --rc genhtml_branch_coverage=1 00:04:15.676 --rc genhtml_function_coverage=1 00:04:15.676 --rc genhtml_legend=1 00:04:15.676 --rc geninfo_all_blocks=1 00:04:15.676 --rc geninfo_unexecuted_blocks=1 00:04:15.676 00:04:15.676 ' 00:04:15.676 17:13:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:15.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.676 --rc genhtml_branch_coverage=1 00:04:15.676 --rc genhtml_function_coverage=1 00:04:15.676 --rc genhtml_legend=1 00:04:15.676 --rc geninfo_all_blocks=1 00:04:15.676 --rc geninfo_unexecuted_blocks=1 00:04:15.676 00:04:15.676 ' 00:04:15.676 17:13:35 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:15.676 17:13:35 -- setup/devices.sh@192 -- # setup reset 00:04:15.676 17:13:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.676 17:13:35 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.873 17:13:38 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:19.873 17:13:38 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:19.873 17:13:38 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:19.873 17:13:38 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:19.873 17:13:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:19.873 17:13:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:19.873 17:13:38 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:19.873 17:13:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.873 17:13:38 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:19.873 17:13:38 -- setup/devices.sh@196 -- # blocks=() 00:04:19.873 17:13:38 -- setup/devices.sh@196 -- # declare -a blocks 00:04:19.873 17:13:38 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:19.873 17:13:38 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:19.873 17:13:38 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:19.873 17:13:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:19.873 17:13:38 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:19.873 17:13:38 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:19.873 17:13:38 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:19.873 17:13:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:19.873 17:13:38 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:19.873 17:13:38 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:19.873 17:13:38 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:19.873 No valid GPT data, bailing 00:04:19.873 17:13:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.873 17:13:38 -- scripts/common.sh@393 -- # pt= 00:04:19.873 17:13:38 -- scripts/common.sh@394 -- # return 1 00:04:19.873 17:13:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:19.873 17:13:38 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:19.873 17:13:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:19.873 17:13:38 -- setup/common.sh@80 -- # echo 2000398934016 00:04:19.873 17:13:38 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:19.873 17:13:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.873 17:13:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:19.873 17:13:38 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:19.873 17:13:38 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:19.873 17:13:38 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:19.873 17:13:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.873 17:13:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.873 17:13:38 -- common/autotest_common.sh@10 -- # set +x 00:04:19.873 ************************************ 00:04:19.873 START TEST nvme_mount 00:04:19.873 ************************************ 00:04:19.873 17:13:38 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:19.873 17:13:38 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:19.873 17:13:38 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:19.873 17:13:38 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.874 17:13:38 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.874 17:13:38 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:19.874 17:13:38 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:19.874 17:13:38 -- setup/common.sh@40 -- # local part_no=1 00:04:19.874 17:13:38 -- setup/common.sh@41 -- # local size=1073741824 00:04:19.874 17:13:38 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:19.874 17:13:38 -- setup/common.sh@44 -- # parts=() 00:04:19.874 17:13:38 -- setup/common.sh@44 -- # local parts 00:04:19.874 17:13:38 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:19.874 17:13:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.874 17:13:38 
-- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.874 17:13:38 -- setup/common.sh@46 -- # (( part++ )) 00:04:19.874 17:13:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.874 17:13:38 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:19.874 17:13:38 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:19.874 17:13:38 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:20.446 Creating new GPT entries in memory. 00:04:20.446 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:20.446 other utilities. 00:04:20.446 17:13:39 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:20.446 17:13:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.446 17:13:39 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:20.446 17:13:39 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:20.446 17:13:39 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:21.386 Creating new GPT entries in memory. 00:04:21.386 The operation has completed successfully. 00:04:21.386 17:13:40 -- setup/common.sh@57 -- # (( part++ )) 00:04:21.386 17:13:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.386 17:13:40 -- setup/common.sh@62 -- # wait 2500019 00:04:21.386 17:13:41 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.386 17:13:41 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:21.386 17:13:41 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.386 17:13:41 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:21.386 17:13:41 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:21.386 17:13:41 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.386 17:13:41 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.386 17:13:41 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:21.386 17:13:41 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:21.386 17:13:41 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.386 17:13:41 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.386 17:13:41 -- setup/devices.sh@53 -- # local found=0 00:04:21.386 17:13:41 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.386 17:13:41 -- setup/devices.sh@56 -- # : 00:04:21.386 17:13:41 -- setup/devices.sh@59 -- # local pci status 00:04:21.386 17:13:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.386 17:13:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:21.386 17:13:41 -- setup/devices.sh@47 -- # setup output config 00:04:21.386 17:13:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.386 17:13:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ 
0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:24.677 17:13:44 -- setup/devices.sh@63 -- # found=1 00:04:24.677 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.677 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.677 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.677 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.677 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.677 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.677 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.677 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.677 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.677 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.677 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.677 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.677 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.677 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.677 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.935 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.935 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.935 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.935 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.935 17:13:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.935 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.935 17:13:44 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.935 17:13:44 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:24.935 17:13:44 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.935 17:13:44 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 
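The nvme_mount steps traced above boil down to: wipe the disk's partition tables with sgdisk --zap-all, create one 1 GiB partition (--new=1:2048:2099199, i.e. sectors 2048 through 2099199 at 512 bytes each), format it with mkfs.ext4 -qF, mount it under the test directory, and drop a test_nvme file on it that the cleanup later removes. A condensed sketch of that cycle follows; DISK and MNT are placeholders, partprobe stands in for SPDK's sync_dev_uevents.sh helper, and the script destroys any data on DISK.

#!/usr/bin/env bash
set -euo pipefail
DISK=/dev/nvme0n1          # assumption: the same test disk the log uses
MNT=/tmp/nvme_mount_test   # placeholder mount point

sgdisk "$DISK" --zap-all                  # wipe existing GPT/MBR structures
sgdisk "$DISK" --new=1:2048:2099199       # single 1 GiB partition, as in the trace
partprobe "$DISK" && sleep 1              # let the kernel pick up the new partition
mkfs.ext4 -qF "${DISK}p1"                 # quiet + force, same flags as the log
mkdir -p "$MNT"
mount "${DISK}p1" "$MNT"
echo test > "$MNT/test_nvme"              # the dummy file the test later verifies

# teardown, mirroring cleanup_nvme in the trace
rm "$MNT/test_nvme"
umount "$MNT"
wipefs --all "${DISK}p1"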
00:04:24.935 17:13:44 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.935 17:13:44 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:24.935 17:13:44 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.936 17:13:44 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.936 17:13:44 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.936 17:13:44 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:24.936 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.936 17:13:44 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.936 17:13:44 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:25.194 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:25.194 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:25.194 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:25.194 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:25.194 17:13:44 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:25.194 17:13:44 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:25.194 17:13:44 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.194 17:13:44 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:25.194 17:13:44 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:25.194 17:13:44 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.452 17:13:44 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.452 17:13:44 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:25.452 17:13:44 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:25.452 17:13:44 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.452 17:13:44 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.452 17:13:44 -- setup/devices.sh@53 -- # local found=0 00:04:25.452 17:13:44 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.452 17:13:44 -- setup/devices.sh@56 -- # : 00:04:25.452 17:13:44 -- setup/devices.sh@59 -- # local pci status 00:04:25.452 17:13:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.452 17:13:44 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:25.452 17:13:44 -- setup/devices.sh@47 -- # setup output config 00:04:25.452 17:13:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.452 17:13:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:28.739 17:13:48 -- setup/devices.sh@63 -- # found=1 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.739 17:13:48 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:28.739 17:13:48 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.739 17:13:48 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.739 17:13:48 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.739 17:13:48 -- setup/devices.sh@123 -- # umount 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.739 17:13:48 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:28.739 17:13:48 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:28.739 17:13:48 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:28.739 17:13:48 -- setup/devices.sh@50 -- # local mount_point= 00:04:28.739 17:13:48 -- setup/devices.sh@51 -- # local test_file= 00:04:28.739 17:13:48 -- setup/devices.sh@53 -- # local found=0 00:04:28.739 17:13:48 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:28.739 17:13:48 -- setup/devices.sh@59 -- # local pci status 00:04:28.739 17:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.739 17:13:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:28.739 17:13:48 -- setup/devices.sh@47 -- # setup output config 00:04:28.739 17:13:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.739 17:13:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:31.272 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.272 17:13:51 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:31.272 17:13:51 -- setup/devices.sh@63 -- # found=1 00:04:31.272 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.272 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.272 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.272 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.272 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.530 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.530 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.530 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.530 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.530 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.530 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.530 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.530 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.530 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.530 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.531 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.531 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.531 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.531 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.531 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.531 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.531 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.531 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.531 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.531 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.531 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.531 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.531 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.531 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.531 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.531 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.531 17:13:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.531 17:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.531 17:13:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.531 17:13:51 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:31.531 17:13:51 -- setup/devices.sh@68 -- # return 0 00:04:31.531 17:13:51 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:31.531 17:13:51 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.531 17:13:51 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.531 17:13:51 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:31.531 17:13:51 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:31.531 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.531 00:04:31.531 real 0m12.345s 00:04:31.531 user 0m3.433s 00:04:31.531 sys 0m6.764s 00:04:31.531 17:13:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:31.531 17:13:51 -- common/autotest_common.sh@10 -- # set +x 00:04:31.531 ************************************ 00:04:31.531 END TEST nvme_mount 00:04:31.531 ************************************ 00:04:31.531 17:13:51 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:31.531 17:13:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:31.531 17:13:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:31.531 17:13:51 -- common/autotest_common.sh@10 -- # set +x 00:04:31.790 ************************************ 00:04:31.790 START TEST dm_mount 00:04:31.790 ************************************ 00:04:31.790 17:13:51 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:31.790 17:13:51 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:31.790 17:13:51 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:31.790 17:13:51 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:31.790 17:13:51 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:31.790 17:13:51 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:31.790 17:13:51 -- setup/common.sh@40 -- # local part_no=2 00:04:31.790 17:13:51 -- setup/common.sh@41 -- # local size=1073741824 00:04:31.790 17:13:51 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:31.790 17:13:51 -- setup/common.sh@44 -- # parts=() 00:04:31.790 17:13:51 -- setup/common.sh@44 -- # local parts 00:04:31.790 17:13:51 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:31.790 17:13:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.790 17:13:51 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:31.790 17:13:51 -- setup/common.sh@46 -- # (( part++ )) 00:04:31.790 17:13:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.790 17:13:51 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:31.790 17:13:51 -- setup/common.sh@46 -- # (( part++ )) 00:04:31.790 17:13:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.790 17:13:51 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:31.790 17:13:51 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:31.790 
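The dm_mount test that starts here repeats the partitioning dance with two partitions and then layers a device-mapper node on top: the trace that follows shows sgdisk --new=1:2048:2099199 and --new=2:2099200:4196351, then "dmsetup create nvme_dm_test", which resolves to /dev/dm-2. The excerpt never prints the DM table, so the linear target concatenating both partitions in the sketch below is an assumption, as are the /tmp mount point and the use of partprobe in place of sync_dev_uevents.sh.

#!/usr/bin/env bash
set -euo pipefail
DISK=/dev/nvme0n1
sgdisk "$DISK" --zap-all
sgdisk "$DISK" --new=1:2048:2099199     # partition 1, as in the trace
sgdisk "$DISK" --new=2:2099200:4196351  # partition 2, as in the trace
partprobe "$DISK" && sleep 1

p1=${DISK}p1
p2=${DISK}p2
s1=$(blockdev --getsz "$p1")            # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF

mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p /tmp/dm_mount_test
mount /dev/mapper/nvme_dm_test /tmp/dm_mount_test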
17:13:51 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:32.727 Creating new GPT entries in memory. 00:04:32.727 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:32.727 other utilities. 00:04:32.727 17:13:52 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:32.727 17:13:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.727 17:13:52 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:32.727 17:13:52 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:32.727 17:13:52 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:33.663 Creating new GPT entries in memory. 00:04:33.663 The operation has completed successfully. 00:04:33.663 17:13:53 -- setup/common.sh@57 -- # (( part++ )) 00:04:33.663 17:13:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:33.663 17:13:53 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:33.663 17:13:53 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:33.663 17:13:53 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:34.599 The operation has completed successfully. 00:04:34.599 17:13:54 -- setup/common.sh@57 -- # (( part++ )) 00:04:34.599 17:13:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.599 17:13:54 -- setup/common.sh@62 -- # wait 2504528 00:04:34.858 17:13:54 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:34.858 17:13:54 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:34.858 17:13:54 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:34.858 17:13:54 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:34.858 17:13:54 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:34.858 17:13:54 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:34.858 17:13:54 -- setup/devices.sh@161 -- # break 00:04:34.858 17:13:54 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:34.858 17:13:54 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:34.858 17:13:54 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:34.858 17:13:54 -- setup/devices.sh@166 -- # dm=dm-2 00:04:34.858 17:13:54 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:34.858 17:13:54 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:34.858 17:13:54 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:34.858 17:13:54 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:34.858 17:13:54 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:34.858 17:13:54 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:34.858 17:13:54 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:34.858 17:13:54 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:34.858 17:13:54 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:34.858 17:13:54 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:34.858 17:13:54 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:34.858 17:13:54 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:34.858 17:13:54 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:34.858 17:13:54 -- setup/devices.sh@53 -- # local found=0 00:04:34.858 17:13:54 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:34.858 17:13:54 -- setup/devices.sh@56 -- # : 00:04:34.858 17:13:54 -- setup/devices.sh@59 -- # local pci status 00:04:34.858 17:13:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.858 17:13:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:34.858 17:13:54 -- setup/devices.sh@47 -- # setup output config 00:04:34.858 17:13:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.858 17:13:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:38.198 17:13:57 -- setup/devices.sh@63 -- # found=1 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.198 17:13:57 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:38.198 17:13:57 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:38.198 17:13:57 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:38.198 17:13:57 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:38.198 17:13:57 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:38.198 17:13:57 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:38.198 17:13:57 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:38.198 17:13:57 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:38.198 17:13:57 -- setup/devices.sh@50 -- # local mount_point= 00:04:38.198 17:13:57 -- setup/devices.sh@51 -- # local test_file= 00:04:38.198 17:13:57 -- setup/devices.sh@53 -- # local found=0 00:04:38.198 17:13:57 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:38.198 17:13:57 -- setup/devices.sh@59 -- # local pci status 00:04:38.198 17:13:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.198 17:13:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:38.198 17:13:57 -- setup/devices.sh@47 -- # setup output config 00:04:38.198 17:13:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.198 17:13:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:41.589 17:14:01 -- setup/devices.sh@63 -- # found=1 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.589 17:14:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.589 17:14:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.879 17:14:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.879 17:14:01 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:41.879 17:14:01 -- setup/devices.sh@68 -- # return 0 00:04:41.879 17:14:01 -- setup/devices.sh@187 -- # cleanup_dm 00:04:41.879 17:14:01 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:41.879 17:14:01 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:41.879 17:14:01 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:41.879 17:14:01 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.879 17:14:01 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:41.879 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:41.879 17:14:01 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:41.879 17:14:01 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:41.879 00:04:41.879 real 0m10.141s 00:04:41.879 user 0m2.466s 00:04:41.879 sys 0m4.775s 00:04:41.879 17:14:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.879 17:14:01 -- common/autotest_common.sh@10 -- # set +x 00:04:41.879 
************************************ 00:04:41.879 END TEST dm_mount 00:04:41.879 ************************************ 00:04:41.879 17:14:01 -- setup/devices.sh@1 -- # cleanup 00:04:41.879 17:14:01 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:41.879 17:14:01 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.879 17:14:01 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.879 17:14:01 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:41.879 17:14:01 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:41.879 17:14:01 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.139 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:42.139 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:42.139 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:42.139 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:42.139 17:14:01 -- setup/devices.sh@12 -- # cleanup_dm 00:04:42.139 17:14:01 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:42.139 17:14:01 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:42.139 17:14:01 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.139 17:14:01 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:42.139 17:14:01 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.139 17:14:01 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:42.139 00:04:42.139 real 0m26.823s 00:04:42.139 user 0m7.351s 00:04:42.139 sys 0m14.349s 00:04:42.139 17:14:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.139 17:14:01 -- common/autotest_common.sh@10 -- # set +x 00:04:42.139 ************************************ 00:04:42.139 END TEST devices 00:04:42.139 ************************************ 00:04:42.139 00:04:42.139 real 1m35.293s 00:04:42.139 user 0m29.094s 00:04:42.139 sys 0m53.957s 00:04:42.139 17:14:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.139 17:14:01 -- common/autotest_common.sh@10 -- # set +x 00:04:42.139 ************************************ 00:04:42.139 END TEST setup.sh 00:04:42.139 ************************************ 00:04:42.139 17:14:01 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:45.434 Hugepages 00:04:45.434 node hugesize free / total 00:04:45.434 node0 1048576kB 0 / 0 00:04:45.434 node0 2048kB 2048 / 2048 00:04:45.434 node1 1048576kB 0 / 0 00:04:45.434 node1 2048kB 0 / 0 00:04:45.434 00:04:45.434 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.434 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:45.434 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:45.434 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:45.434 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:45.434 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:45.434 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:45.434 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:45.434 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:45.434 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:45.434 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:45.434 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:45.434 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:45.434 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:45.434 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:45.434 I/OAT 
0000:80:04.6 8086 2021 1 ioatdma - - 00:04:45.434 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:45.695 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:45.695 17:14:05 -- spdk/autotest.sh@128 -- # uname -s 00:04:45.695 17:14:05 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:45.695 17:14:05 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:45.695 17:14:05 -- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:48.991 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:48.991 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:48.991 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:48.991 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:48.991 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:48.991 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:48.991 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:48.991 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:48.991 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:48.991 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:48.991 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:49.251 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:49.251 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:49.251 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:49.251 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:49.251 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:51.164 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:51.164 17:14:10 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:52.544 17:14:11 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:52.544 17:14:11 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:52.544 17:14:11 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:52.544 17:14:11 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:52.544 17:14:11 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:52.544 17:14:11 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:52.544 17:14:11 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:52.544 17:14:11 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:52.544 17:14:11 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:52.544 17:14:11 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:52.544 17:14:11 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:04:52.545 17:14:11 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:55.832 Waiting for block devices as requested 00:04:55.832 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:55.832 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:55.832 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:55.832 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:56.090 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:56.090 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:56.090 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:56.349 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:56.349 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:56.349 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:56.608 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:56.608 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:56.608 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:56.867 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:56.867 0000:80:04.1 (8086 
2021): vfio-pci -> ioatdma 00:04:56.867 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:57.125 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:57.125 17:14:16 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:57.125 17:14:16 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:57.125 17:14:16 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:04:57.125 17:14:16 -- common/autotest_common.sh@1497 -- # grep 0000:d8:00.0/nvme/nvme 00:04:57.125 17:14:16 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:57.125 17:14:16 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:57.125 17:14:16 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:57.125 17:14:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:57.125 17:14:16 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:57.125 17:14:16 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:57.125 17:14:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:57.126 17:14:16 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:57.126 17:14:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:57.126 17:14:16 -- common/autotest_common.sh@1540 -- # oacs=' 0xe' 00:04:57.126 17:14:16 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:57.126 17:14:16 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:57.126 17:14:16 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:57.126 17:14:16 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:57.126 17:14:16 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:57.126 17:14:16 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:57.126 17:14:16 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:57.126 17:14:16 -- common/autotest_common.sh@1552 -- # continue 00:04:57.126 17:14:16 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:57.126 17:14:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.126 17:14:16 -- common/autotest_common.sh@10 -- # set +x 00:04:57.384 17:14:16 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:57.384 17:14:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.384 17:14:16 -- common/autotest_common.sh@10 -- # set +x 00:04:57.384 17:14:16 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:00.671 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:00.671 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:00.671 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:00.671 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:00.671 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:00.671 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:00.671 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:00.671 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:00.930 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:00.930 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:00.930 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:00.930 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:00.930 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:00.930 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:00.930 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:00.930 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
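The nvme_namespace_revert step above resolves the NVMe controller behind BDF 0000:d8:00.0 through sysfs and then reads the OACS and unvmcap fields with nvme id-ctrl before deciding whether a namespace revert is needed. A minimal standalone sketch of that check, assuming the same single-controller layout as this run (nvme0 at 0000:d8:00.0), nvme-cli installed, and the device bound back to the kernel nvme driver via scripts/setup.sh reset as shown above:

# Hedged sketch of the controller lookup and OACS/unvmcap check seen in the log.
# BDF and controller name are specific to this host; adjust for other machines.
bdf=0000:d8:00.0
path=$(readlink -f /sys/class/nvme/nvme0 | grep "$bdf/nvme/nvme")   # e.g. /sys/devices/pci0000:d7/.../nvme/nvme0
ctrlr=/dev/$(basename "$path")                                      # -> /dev/nvme0
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)             # 0xe in this run
unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)       # 0 in this run
echo "ctrlr=$ctrlr oacs=$oacs unvmcap=$unvmcap"

In the log, the namespace-management bit derived from OACS is non-zero (oacs_ns_manage=8) and unvmcap is 0, so the helper continues without reverting.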
00:05:02.836 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:02.836 17:14:22 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:02.836 17:14:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.836 17:14:22 -- common/autotest_common.sh@10 -- # set +x 00:05:02.836 17:14:22 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:02.836 17:14:22 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:02.836 17:14:22 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:02.837 17:14:22 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:02.837 17:14:22 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:02.837 17:14:22 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:02.837 17:14:22 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:02.837 17:14:22 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:02.837 17:14:22 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.837 17:14:22 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:02.837 17:14:22 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:03.096 17:14:22 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:03.096 17:14:22 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:05:03.096 17:14:22 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:03.096 17:14:22 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:03.096 17:14:22 -- common/autotest_common.sh@1575 -- # device=0x0a54 00:05:03.096 17:14:22 -- common/autotest_common.sh@1576 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:03.096 17:14:22 -- common/autotest_common.sh@1577 -- # bdfs+=($bdf) 00:05:03.096 17:14:22 -- common/autotest_common.sh@1581 -- # printf '%s\n' 0000:d8:00.0 00:05:03.096 17:14:22 -- common/autotest_common.sh@1587 -- # [[ -z 0000:d8:00.0 ]] 00:05:03.096 17:14:22 -- common/autotest_common.sh@1592 -- # spdk_tgt_pid=2514751 00:05:03.096 17:14:22 -- common/autotest_common.sh@1593 -- # waitforlisten 2514751 00:05:03.096 17:14:22 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.096 17:14:22 -- common/autotest_common.sh@829 -- # '[' -z 2514751 ']' 00:05:03.096 17:14:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.096 17:14:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.096 17:14:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.096 17:14:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.096 17:14:22 -- common/autotest_common.sh@10 -- # set +x 00:05:03.096 [2024-11-09 17:14:22.696263] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:03.096 [2024-11-09 17:14:22.696311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2514751 ] 00:05:03.096 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.096 [2024-11-09 17:14:22.767010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.096 [2024-11-09 17:14:22.842808] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:03.096 [2024-11-09 17:14:22.842929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.031 17:14:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.031 17:14:23 -- common/autotest_common.sh@862 -- # return 0 00:05:04.031 17:14:23 -- common/autotest_common.sh@1595 -- # bdf_id=0 00:05:04.031 17:14:23 -- common/autotest_common.sh@1596 -- # for bdf in "${bdfs[@]}" 00:05:04.031 17:14:23 -- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:07.317 nvme0n1 00:05:07.317 17:14:26 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:07.317 [2024-11-09 17:14:26.664172] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:07.317 request: 00:05:07.317 { 00:05:07.317 "nvme_ctrlr_name": "nvme0", 00:05:07.317 "password": "test", 00:05:07.317 "method": "bdev_nvme_opal_revert", 00:05:07.317 "req_id": 1 00:05:07.317 } 00:05:07.317 Got JSON-RPC error response 00:05:07.317 response: 00:05:07.317 { 00:05:07.317 "code": -32602, 00:05:07.317 "message": "Invalid parameters" 00:05:07.317 } 00:05:07.317 17:14:26 -- common/autotest_common.sh@1599 -- # true 00:05:07.317 17:14:26 -- common/autotest_common.sh@1600 -- # (( ++bdf_id )) 00:05:07.317 17:14:26 -- common/autotest_common.sh@1603 -- # killprocess 2514751 00:05:07.317 17:14:26 -- common/autotest_common.sh@936 -- # '[' -z 2514751 ']' 00:05:07.317 17:14:26 -- common/autotest_common.sh@940 -- # kill -0 2514751 00:05:07.317 17:14:26 -- common/autotest_common.sh@941 -- # uname 00:05:07.317 17:14:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:07.317 17:14:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2514751 00:05:07.317 17:14:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:07.317 17:14:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:07.317 17:14:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2514751' 00:05:07.317 killing process with pid 2514751 00:05:07.317 17:14:26 -- common/autotest_common.sh@955 -- # kill 2514751 00:05:07.317 17:14:26 -- common/autotest_common.sh@960 -- # wait 2514751 00:05:09.848 17:14:29 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:09.848 17:14:29 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:09.848 17:14:29 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:09.848 17:14:29 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:09.848 17:14:29 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:09.848 17:14:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.848 17:14:29 -- common/autotest_common.sh@10 -- # set +x 00:05:09.848 17:14:29 -- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:09.848 17:14:29 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.848 17:14:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.848 17:14:29 -- common/autotest_common.sh@10 -- # set +x 00:05:09.848 ************************************ 00:05:09.848 START TEST env 00:05:09.848 ************************************ 00:05:09.848 17:14:29 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:09.848 * Looking for test storage... 00:05:09.848 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:09.848 17:14:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:09.848 17:14:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:09.848 17:14:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:09.848 17:14:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:09.848 17:14:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:09.848 17:14:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:09.848 17:14:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:09.848 17:14:29 -- scripts/common.sh@335 -- # IFS=.-: 00:05:09.848 17:14:29 -- scripts/common.sh@335 -- # read -ra ver1 00:05:09.848 17:14:29 -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.848 17:14:29 -- scripts/common.sh@336 -- # read -ra ver2 00:05:09.848 17:14:29 -- scripts/common.sh@337 -- # local 'op=<' 00:05:09.848 17:14:29 -- scripts/common.sh@339 -- # ver1_l=2 00:05:09.848 17:14:29 -- scripts/common.sh@340 -- # ver2_l=1 00:05:09.848 17:14:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:09.848 17:14:29 -- scripts/common.sh@343 -- # case "$op" in 00:05:09.848 17:14:29 -- scripts/common.sh@344 -- # : 1 00:05:09.848 17:14:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:09.848 17:14:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.848 17:14:29 -- scripts/common.sh@364 -- # decimal 1 00:05:09.848 17:14:29 -- scripts/common.sh@352 -- # local d=1 00:05:09.848 17:14:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.848 17:14:29 -- scripts/common.sh@354 -- # echo 1 00:05:09.848 17:14:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:09.848 17:14:29 -- scripts/common.sh@365 -- # decimal 2 00:05:09.848 17:14:29 -- scripts/common.sh@352 -- # local d=2 00:05:09.848 17:14:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.848 17:14:29 -- scripts/common.sh@354 -- # echo 2 00:05:09.848 17:14:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:09.848 17:14:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:09.848 17:14:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:09.848 17:14:29 -- scripts/common.sh@367 -- # return 0 00:05:09.848 17:14:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.848 17:14:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:09.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.848 --rc genhtml_branch_coverage=1 00:05:09.848 --rc genhtml_function_coverage=1 00:05:09.848 --rc genhtml_legend=1 00:05:09.848 --rc geninfo_all_blocks=1 00:05:09.848 --rc geninfo_unexecuted_blocks=1 00:05:09.848 00:05:09.848 ' 00:05:09.848 17:14:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:09.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.848 --rc genhtml_branch_coverage=1 00:05:09.848 --rc genhtml_function_coverage=1 00:05:09.849 --rc genhtml_legend=1 00:05:09.849 --rc geninfo_all_blocks=1 00:05:09.849 --rc geninfo_unexecuted_blocks=1 00:05:09.849 00:05:09.849 ' 00:05:09.849 17:14:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:09.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.849 --rc genhtml_branch_coverage=1 00:05:09.849 --rc genhtml_function_coverage=1 00:05:09.849 --rc genhtml_legend=1 00:05:09.849 --rc geninfo_all_blocks=1 00:05:09.849 --rc geninfo_unexecuted_blocks=1 00:05:09.849 00:05:09.849 ' 00:05:09.849 17:14:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:09.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.849 --rc genhtml_branch_coverage=1 00:05:09.849 --rc genhtml_function_coverage=1 00:05:09.849 --rc genhtml_legend=1 00:05:09.849 --rc geninfo_all_blocks=1 00:05:09.849 --rc geninfo_unexecuted_blocks=1 00:05:09.849 00:05:09.849 ' 00:05:09.849 17:14:29 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:09.849 17:14:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.849 17:14:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.849 17:14:29 -- common/autotest_common.sh@10 -- # set +x 00:05:09.849 ************************************ 00:05:09.849 START TEST env_memory 00:05:09.849 ************************************ 00:05:09.849 17:14:29 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:09.849 00:05:09.849 00:05:09.849 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.849 http://cunit.sourceforge.net/ 00:05:09.849 00:05:09.849 00:05:09.849 Suite: memory 00:05:09.849 Test: alloc and free memory map ...[2024-11-09 17:14:29.468441] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify 
failed 00:05:09.849 passed 00:05:09.849 Test: mem map translation ...[2024-11-09 17:14:29.486343] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:09.849 [2024-11-09 17:14:29.486360] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:09.849 [2024-11-09 17:14:29.486393] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:09.849 [2024-11-09 17:14:29.486401] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:09.849 passed 00:05:09.849 Test: mem map registration ...[2024-11-09 17:14:29.521931] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:09.849 [2024-11-09 17:14:29.521947] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:09.849 passed 00:05:09.849 Test: mem map adjacent registrations ...passed 00:05:09.849 00:05:09.849 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.849 suites 1 1 n/a 0 0 00:05:09.849 tests 4 4 4 0 0 00:05:09.849 asserts 152 152 152 0 n/a 00:05:09.849 00:05:09.849 Elapsed time = 0.130 seconds 00:05:09.849 00:05:09.849 real 0m0.143s 00:05:09.849 user 0m0.128s 00:05:09.849 sys 0m0.014s 00:05:09.849 17:14:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.849 17:14:29 -- common/autotest_common.sh@10 -- # set +x 00:05:09.849 ************************************ 00:05:09.849 END TEST env_memory 00:05:09.849 ************************************ 00:05:09.849 17:14:29 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:09.849 17:14:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.849 17:14:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.849 17:14:29 -- common/autotest_common.sh@10 -- # set +x 00:05:09.849 ************************************ 00:05:09.849 START TEST env_vtophys 00:05:09.849 ************************************ 00:05:09.849 17:14:29 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:10.108 EAL: lib.eal log level changed from notice to debug 00:05:10.108 EAL: Detected lcore 0 as core 0 on socket 0 00:05:10.108 EAL: Detected lcore 1 as core 1 on socket 0 00:05:10.108 EAL: Detected lcore 2 as core 2 on socket 0 00:05:10.108 EAL: Detected lcore 3 as core 3 on socket 0 00:05:10.108 EAL: Detected lcore 4 as core 4 on socket 0 00:05:10.108 EAL: Detected lcore 5 as core 5 on socket 0 00:05:10.108 EAL: Detected lcore 6 as core 6 on socket 0 00:05:10.108 EAL: Detected lcore 7 as core 8 on socket 0 00:05:10.108 EAL: Detected lcore 8 as core 9 on socket 0 00:05:10.108 EAL: Detected lcore 9 as core 10 on socket 0 00:05:10.108 EAL: Detected lcore 10 as core 11 on socket 0 00:05:10.108 EAL: Detected lcore 11 as core 12 on socket 0 00:05:10.108 EAL: Detected lcore 12 as core 13 on socket 0 00:05:10.108 EAL: Detected lcore 13 as core 14 on socket 0 00:05:10.108 EAL: 
Detected lcore 14 as core 16 on socket 0 00:05:10.108 EAL: Detected lcore 15 as core 17 on socket 0 00:05:10.108 EAL: Detected lcore 16 as core 18 on socket 0 00:05:10.108 EAL: Detected lcore 17 as core 19 on socket 0 00:05:10.108 EAL: Detected lcore 18 as core 20 on socket 0 00:05:10.108 EAL: Detected lcore 19 as core 21 on socket 0 00:05:10.108 EAL: Detected lcore 20 as core 22 on socket 0 00:05:10.108 EAL: Detected lcore 21 as core 24 on socket 0 00:05:10.108 EAL: Detected lcore 22 as core 25 on socket 0 00:05:10.108 EAL: Detected lcore 23 as core 26 on socket 0 00:05:10.108 EAL: Detected lcore 24 as core 27 on socket 0 00:05:10.108 EAL: Detected lcore 25 as core 28 on socket 0 00:05:10.108 EAL: Detected lcore 26 as core 29 on socket 0 00:05:10.108 EAL: Detected lcore 27 as core 30 on socket 0 00:05:10.108 EAL: Detected lcore 28 as core 0 on socket 1 00:05:10.108 EAL: Detected lcore 29 as core 1 on socket 1 00:05:10.108 EAL: Detected lcore 30 as core 2 on socket 1 00:05:10.108 EAL: Detected lcore 31 as core 3 on socket 1 00:05:10.108 EAL: Detected lcore 32 as core 4 on socket 1 00:05:10.108 EAL: Detected lcore 33 as core 5 on socket 1 00:05:10.108 EAL: Detected lcore 34 as core 6 on socket 1 00:05:10.108 EAL: Detected lcore 35 as core 8 on socket 1 00:05:10.108 EAL: Detected lcore 36 as core 9 on socket 1 00:05:10.108 EAL: Detected lcore 37 as core 10 on socket 1 00:05:10.108 EAL: Detected lcore 38 as core 11 on socket 1 00:05:10.108 EAL: Detected lcore 39 as core 12 on socket 1 00:05:10.108 EAL: Detected lcore 40 as core 13 on socket 1 00:05:10.108 EAL: Detected lcore 41 as core 14 on socket 1 00:05:10.108 EAL: Detected lcore 42 as core 16 on socket 1 00:05:10.108 EAL: Detected lcore 43 as core 17 on socket 1 00:05:10.108 EAL: Detected lcore 44 as core 18 on socket 1 00:05:10.108 EAL: Detected lcore 45 as core 19 on socket 1 00:05:10.108 EAL: Detected lcore 46 as core 20 on socket 1 00:05:10.108 EAL: Detected lcore 47 as core 21 on socket 1 00:05:10.108 EAL: Detected lcore 48 as core 22 on socket 1 00:05:10.108 EAL: Detected lcore 49 as core 24 on socket 1 00:05:10.108 EAL: Detected lcore 50 as core 25 on socket 1 00:05:10.108 EAL: Detected lcore 51 as core 26 on socket 1 00:05:10.108 EAL: Detected lcore 52 as core 27 on socket 1 00:05:10.108 EAL: Detected lcore 53 as core 28 on socket 1 00:05:10.108 EAL: Detected lcore 54 as core 29 on socket 1 00:05:10.108 EAL: Detected lcore 55 as core 30 on socket 1 00:05:10.108 EAL: Detected lcore 56 as core 0 on socket 0 00:05:10.108 EAL: Detected lcore 57 as core 1 on socket 0 00:05:10.108 EAL: Detected lcore 58 as core 2 on socket 0 00:05:10.108 EAL: Detected lcore 59 as core 3 on socket 0 00:05:10.108 EAL: Detected lcore 60 as core 4 on socket 0 00:05:10.108 EAL: Detected lcore 61 as core 5 on socket 0 00:05:10.108 EAL: Detected lcore 62 as core 6 on socket 0 00:05:10.108 EAL: Detected lcore 63 as core 8 on socket 0 00:05:10.108 EAL: Detected lcore 64 as core 9 on socket 0 00:05:10.108 EAL: Detected lcore 65 as core 10 on socket 0 00:05:10.108 EAL: Detected lcore 66 as core 11 on socket 0 00:05:10.108 EAL: Detected lcore 67 as core 12 on socket 0 00:05:10.108 EAL: Detected lcore 68 as core 13 on socket 0 00:05:10.108 EAL: Detected lcore 69 as core 14 on socket 0 00:05:10.108 EAL: Detected lcore 70 as core 16 on socket 0 00:05:10.108 EAL: Detected lcore 71 as core 17 on socket 0 00:05:10.108 EAL: Detected lcore 72 as core 18 on socket 0 00:05:10.108 EAL: Detected lcore 73 as core 19 on socket 0 00:05:10.108 EAL: Detected lcore 74 as core 20 on 
socket 0 00:05:10.108 EAL: Detected lcore 75 as core 21 on socket 0 00:05:10.108 EAL: Detected lcore 76 as core 22 on socket 0 00:05:10.108 EAL: Detected lcore 77 as core 24 on socket 0 00:05:10.108 EAL: Detected lcore 78 as core 25 on socket 0 00:05:10.108 EAL: Detected lcore 79 as core 26 on socket 0 00:05:10.108 EAL: Detected lcore 80 as core 27 on socket 0 00:05:10.108 EAL: Detected lcore 81 as core 28 on socket 0 00:05:10.108 EAL: Detected lcore 82 as core 29 on socket 0 00:05:10.108 EAL: Detected lcore 83 as core 30 on socket 0 00:05:10.108 EAL: Detected lcore 84 as core 0 on socket 1 00:05:10.108 EAL: Detected lcore 85 as core 1 on socket 1 00:05:10.108 EAL: Detected lcore 86 as core 2 on socket 1 00:05:10.108 EAL: Detected lcore 87 as core 3 on socket 1 00:05:10.108 EAL: Detected lcore 88 as core 4 on socket 1 00:05:10.108 EAL: Detected lcore 89 as core 5 on socket 1 00:05:10.108 EAL: Detected lcore 90 as core 6 on socket 1 00:05:10.108 EAL: Detected lcore 91 as core 8 on socket 1 00:05:10.108 EAL: Detected lcore 92 as core 9 on socket 1 00:05:10.108 EAL: Detected lcore 93 as core 10 on socket 1 00:05:10.108 EAL: Detected lcore 94 as core 11 on socket 1 00:05:10.108 EAL: Detected lcore 95 as core 12 on socket 1 00:05:10.108 EAL: Detected lcore 96 as core 13 on socket 1 00:05:10.108 EAL: Detected lcore 97 as core 14 on socket 1 00:05:10.108 EAL: Detected lcore 98 as core 16 on socket 1 00:05:10.108 EAL: Detected lcore 99 as core 17 on socket 1 00:05:10.108 EAL: Detected lcore 100 as core 18 on socket 1 00:05:10.108 EAL: Detected lcore 101 as core 19 on socket 1 00:05:10.108 EAL: Detected lcore 102 as core 20 on socket 1 00:05:10.108 EAL: Detected lcore 103 as core 21 on socket 1 00:05:10.108 EAL: Detected lcore 104 as core 22 on socket 1 00:05:10.108 EAL: Detected lcore 105 as core 24 on socket 1 00:05:10.108 EAL: Detected lcore 106 as core 25 on socket 1 00:05:10.108 EAL: Detected lcore 107 as core 26 on socket 1 00:05:10.108 EAL: Detected lcore 108 as core 27 on socket 1 00:05:10.108 EAL: Detected lcore 109 as core 28 on socket 1 00:05:10.108 EAL: Detected lcore 110 as core 29 on socket 1 00:05:10.108 EAL: Detected lcore 111 as core 30 on socket 1 00:05:10.108 EAL: Maximum logical cores by configuration: 128 00:05:10.108 EAL: Detected CPU lcores: 112 00:05:10.108 EAL: Detected NUMA nodes: 2 00:05:10.108 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:10.108 EAL: Detected shared linkage of DPDK 00:05:10.108 EAL: No shared files mode enabled, IPC will be disabled 00:05:10.108 EAL: Bus pci wants IOVA as 'DC' 00:05:10.108 EAL: Buses did not request a specific IOVA mode. 00:05:10.108 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:10.108 EAL: Selected IOVA mode 'VA' 00:05:10.108 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.108 EAL: Probing VFIO support... 00:05:10.109 EAL: IOMMU type 1 (Type 1) is supported 00:05:10.109 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:10.109 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:10.109 EAL: VFIO support initialized 00:05:10.109 EAL: Ask a virtual area of 0x2e000 bytes 00:05:10.109 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:10.109 EAL: Setting up physically contiguous memory... 
00:05:10.109 EAL: Setting maximum number of open files to 524288 00:05:10.109 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:10.109 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:10.109 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:10.109 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.109 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:10.109 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.109 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.109 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:10.109 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:10.109 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.109 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:10.109 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.109 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.109 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:10.109 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:10.109 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.109 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:10.109 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.109 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.109 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:10.109 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:10.109 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.109 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:10.109 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.109 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.109 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:10.109 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:10.109 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:10.109 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.109 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:10.109 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.109 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.109 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:10.109 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:10.109 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.109 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:10.109 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.109 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.109 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:10.109 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:10.109 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.109 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:10.109 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.109 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.109 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:10.109 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:10.109 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.109 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:10.109 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.109 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.109 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:10.109 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:10.109 EAL: Hugepages will be freed exactly as allocated. 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: TSC frequency is ~2500000 KHz 00:05:10.109 EAL: Main lcore 0 is ready (tid=7f9260617a00;cpuset=[0]) 00:05:10.109 EAL: Trying to obtain current memory policy. 00:05:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.109 EAL: Restoring previous memory policy: 0 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was expanded by 2MB 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:10.109 EAL: Mem event callback 'spdk:(nil)' registered 00:05:10.109 00:05:10.109 00:05:10.109 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.109 http://cunit.sourceforge.net/ 00:05:10.109 00:05:10.109 00:05:10.109 Suite: components_suite 00:05:10.109 Test: vtophys_malloc_test ...passed 00:05:10.109 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.109 EAL: Restoring previous memory policy: 4 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was expanded by 4MB 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was shrunk by 4MB 00:05:10.109 EAL: Trying to obtain current memory policy. 00:05:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.109 EAL: Restoring previous memory policy: 4 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was expanded by 6MB 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was shrunk by 6MB 00:05:10.109 EAL: Trying to obtain current memory policy. 00:05:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.109 EAL: Restoring previous memory policy: 4 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was expanded by 10MB 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was shrunk by 10MB 00:05:10.109 EAL: Trying to obtain current memory policy. 
00:05:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.109 EAL: Restoring previous memory policy: 4 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was expanded by 18MB 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was shrunk by 18MB 00:05:10.109 EAL: Trying to obtain current memory policy. 00:05:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.109 EAL: Restoring previous memory policy: 4 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was expanded by 34MB 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was shrunk by 34MB 00:05:10.109 EAL: Trying to obtain current memory policy. 00:05:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.109 EAL: Restoring previous memory policy: 4 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was expanded by 66MB 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was shrunk by 66MB 00:05:10.109 EAL: Trying to obtain current memory policy. 00:05:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.109 EAL: Restoring previous memory policy: 4 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was expanded by 130MB 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was shrunk by 130MB 00:05:10.109 EAL: Trying to obtain current memory policy. 00:05:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.109 EAL: Restoring previous memory policy: 4 00:05:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.109 EAL: request: mp_malloc_sync 00:05:10.109 EAL: No shared files mode enabled, IPC is disabled 00:05:10.109 EAL: Heap on socket 0 was expanded by 258MB 00:05:10.368 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.368 EAL: request: mp_malloc_sync 00:05:10.368 EAL: No shared files mode enabled, IPC is disabled 00:05:10.368 EAL: Heap on socket 0 was shrunk by 258MB 00:05:10.368 EAL: Trying to obtain current memory policy. 
00:05:10.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.368 EAL: Restoring previous memory policy: 4 00:05:10.368 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.368 EAL: request: mp_malloc_sync 00:05:10.368 EAL: No shared files mode enabled, IPC is disabled 00:05:10.368 EAL: Heap on socket 0 was expanded by 514MB 00:05:10.368 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.627 EAL: request: mp_malloc_sync 00:05:10.627 EAL: No shared files mode enabled, IPC is disabled 00:05:10.627 EAL: Heap on socket 0 was shrunk by 514MB 00:05:10.627 EAL: Trying to obtain current memory policy. 00:05:10.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.627 EAL: Restoring previous memory policy: 4 00:05:10.627 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.627 EAL: request: mp_malloc_sync 00:05:10.627 EAL: No shared files mode enabled, IPC is disabled 00:05:10.627 EAL: Heap on socket 0 was expanded by 1026MB 00:05:10.886 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.145 EAL: request: mp_malloc_sync 00:05:11.145 EAL: No shared files mode enabled, IPC is disabled 00:05:11.145 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:11.145 passed 00:05:11.145 00:05:11.145 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.145 suites 1 1 n/a 0 0 00:05:11.145 tests 2 2 2 0 0 00:05:11.145 asserts 497 497 497 0 n/a 00:05:11.145 00:05:11.145 Elapsed time = 0.965 seconds 00:05:11.145 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.145 EAL: request: mp_malloc_sync 00:05:11.145 EAL: No shared files mode enabled, IPC is disabled 00:05:11.145 EAL: Heap on socket 0 was shrunk by 2MB 00:05:11.145 EAL: No shared files mode enabled, IPC is disabled 00:05:11.145 EAL: No shared files mode enabled, IPC is disabled 00:05:11.145 EAL: No shared files mode enabled, IPC is disabled 00:05:11.145 00:05:11.145 real 0m1.090s 00:05:11.145 user 0m0.638s 00:05:11.145 sys 0m0.423s 00:05:11.145 17:14:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.145 17:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.145 ************************************ 00:05:11.145 END TEST env_vtophys 00:05:11.145 ************************************ 00:05:11.145 17:14:30 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.145 17:14:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.145 17:14:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.145 17:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.145 ************************************ 00:05:11.145 START TEST env_pci 00:05:11.145 ************************************ 00:05:11.145 17:14:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.145 00:05:11.145 00:05:11.145 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.145 http://cunit.sourceforge.net/ 00:05:11.145 00:05:11.145 00:05:11.145 Suite: pci 00:05:11.145 Test: pci_hook ...[2024-11-09 17:14:30.760200] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2516316 has claimed it 00:05:11.145 EAL: Cannot find device (10000:00:01.0) 00:05:11.145 EAL: Failed to attach device on primary process 00:05:11.145 passed 00:05:11.145 00:05:11.145 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.145 suites 1 1 n/a 0 0 00:05:11.145 tests 1 1 1 0 0 00:05:11.145 asserts 
25 25 25 0 n/a 00:05:11.145 00:05:11.145 Elapsed time = 0.032 seconds 00:05:11.145 00:05:11.145 real 0m0.053s 00:05:11.145 user 0m0.015s 00:05:11.145 sys 0m0.038s 00:05:11.145 17:14:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.145 17:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.145 ************************************ 00:05:11.145 END TEST env_pci 00:05:11.145 ************************************ 00:05:11.145 17:14:30 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:11.145 17:14:30 -- env/env.sh@15 -- # uname 00:05:11.145 17:14:30 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:11.145 17:14:30 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:11.145 17:14:30 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.145 17:14:30 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:11.145 17:14:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.145 17:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.145 ************************************ 00:05:11.145 START TEST env_dpdk_post_init 00:05:11.145 ************************************ 00:05:11.145 17:14:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.145 EAL: Detected CPU lcores: 112 00:05:11.145 EAL: Detected NUMA nodes: 2 00:05:11.145 EAL: Detected shared linkage of DPDK 00:05:11.145 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.145 EAL: Selected IOVA mode 'VA' 00:05:11.145 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.146 EAL: VFIO support initialized 00:05:11.146 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.404 EAL: Using IOMMU type 1 (Type 1) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:80:04.3 (socket 1) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:11.404 EAL: Ignore mapping IO port bar(1) 00:05:11.404 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:12.340 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:16.527 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:16.527 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:16.527 Starting DPDK initialization... 00:05:16.527 Starting SPDK post initialization... 00:05:16.527 SPDK NVMe probe 00:05:16.527 Attaching to 0000:d8:00.0 00:05:16.527 Attached to 0000:d8:00.0 00:05:16.527 Cleaning up... 00:05:16.527 00:05:16.527 real 0m5.353s 00:05:16.527 user 0m4.007s 00:05:16.527 sys 0m0.405s 00:05:16.527 17:14:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.527 17:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:16.527 ************************************ 00:05:16.527 END TEST env_dpdk_post_init 00:05:16.527 ************************************ 00:05:16.527 17:14:36 -- env/env.sh@26 -- # uname 00:05:16.527 17:14:36 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:16.527 17:14:36 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:16.527 17:14:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.527 17:14:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.527 17:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:16.527 ************************************ 00:05:16.527 START TEST env_mem_callbacks 00:05:16.527 ************************************ 00:05:16.527 17:14:36 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:16.527 EAL: Detected CPU lcores: 112 00:05:16.527 EAL: Detected NUMA nodes: 2 00:05:16.527 EAL: Detected shared linkage of DPDK 00:05:16.527 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:16.527 EAL: Selected IOVA mode 'VA' 00:05:16.527 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.527 EAL: VFIO support initialized 00:05:16.785 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:16.785 00:05:16.785 00:05:16.785 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.785 http://cunit.sourceforge.net/ 00:05:16.785 00:05:16.785 00:05:16.785 Suite: memory 00:05:16.785 Test: test ... 
00:05:16.785 register 0x200000200000 2097152 00:05:16.785 malloc 3145728 00:05:16.785 register 0x200000400000 4194304 00:05:16.785 buf 0x200000500000 len 3145728 PASSED 00:05:16.785 malloc 64 00:05:16.785 buf 0x2000004fff40 len 64 PASSED 00:05:16.785 malloc 4194304 00:05:16.785 register 0x200000800000 6291456 00:05:16.785 buf 0x200000a00000 len 4194304 PASSED 00:05:16.785 free 0x200000500000 3145728 00:05:16.785 free 0x2000004fff40 64 00:05:16.785 unregister 0x200000400000 4194304 PASSED 00:05:16.785 free 0x200000a00000 4194304 00:05:16.785 unregister 0x200000800000 6291456 PASSED 00:05:16.785 malloc 8388608 00:05:16.785 register 0x200000400000 10485760 00:05:16.785 buf 0x200000600000 len 8388608 PASSED 00:05:16.785 free 0x200000600000 8388608 00:05:16.785 unregister 0x200000400000 10485760 PASSED 00:05:16.785 passed 00:05:16.785 00:05:16.785 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.785 suites 1 1 n/a 0 0 00:05:16.785 tests 1 1 1 0 0 00:05:16.785 asserts 15 15 15 0 n/a 00:05:16.785 00:05:16.785 Elapsed time = 0.004 seconds 00:05:16.785 00:05:16.785 real 0m0.064s 00:05:16.786 user 0m0.023s 00:05:16.786 sys 0m0.040s 00:05:16.786 17:14:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.786 17:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:16.786 ************************************ 00:05:16.786 END TEST env_mem_callbacks 00:05:16.786 ************************************ 00:05:16.786 00:05:16.786 real 0m7.097s 00:05:16.786 user 0m4.973s 00:05:16.786 sys 0m1.206s 00:05:16.786 17:14:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.786 17:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:16.786 ************************************ 00:05:16.786 END TEST env 00:05:16.786 ************************************ 00:05:16.786 17:14:36 -- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:16.786 17:14:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.786 17:14:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.786 17:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:16.786 ************************************ 00:05:16.786 START TEST rpc 00:05:16.786 ************************************ 00:05:16.786 17:14:36 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:16.786 * Looking for test storage... 
00:05:16.786 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:16.786 17:14:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:16.786 17:14:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:16.786 17:14:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:17.045 17:14:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:17.045 17:14:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:17.045 17:14:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:17.045 17:14:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:17.045 17:14:36 -- scripts/common.sh@335 -- # IFS=.-: 00:05:17.045 17:14:36 -- scripts/common.sh@335 -- # read -ra ver1 00:05:17.045 17:14:36 -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.045 17:14:36 -- scripts/common.sh@336 -- # read -ra ver2 00:05:17.045 17:14:36 -- scripts/common.sh@337 -- # local 'op=<' 00:05:17.045 17:14:36 -- scripts/common.sh@339 -- # ver1_l=2 00:05:17.045 17:14:36 -- scripts/common.sh@340 -- # ver2_l=1 00:05:17.045 17:14:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:17.045 17:14:36 -- scripts/common.sh@343 -- # case "$op" in 00:05:17.045 17:14:36 -- scripts/common.sh@344 -- # : 1 00:05:17.045 17:14:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:17.045 17:14:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.045 17:14:36 -- scripts/common.sh@364 -- # decimal 1 00:05:17.045 17:14:36 -- scripts/common.sh@352 -- # local d=1 00:05:17.045 17:14:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.045 17:14:36 -- scripts/common.sh@354 -- # echo 1 00:05:17.045 17:14:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:17.045 17:14:36 -- scripts/common.sh@365 -- # decimal 2 00:05:17.045 17:14:36 -- scripts/common.sh@352 -- # local d=2 00:05:17.045 17:14:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.045 17:14:36 -- scripts/common.sh@354 -- # echo 2 00:05:17.045 17:14:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:17.045 17:14:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:17.045 17:14:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:17.045 17:14:36 -- scripts/common.sh@367 -- # return 0 00:05:17.045 17:14:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.045 17:14:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:17.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.045 --rc genhtml_branch_coverage=1 00:05:17.045 --rc genhtml_function_coverage=1 00:05:17.045 --rc genhtml_legend=1 00:05:17.045 --rc geninfo_all_blocks=1 00:05:17.045 --rc geninfo_unexecuted_blocks=1 00:05:17.045 00:05:17.045 ' 00:05:17.045 17:14:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:17.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.045 --rc genhtml_branch_coverage=1 00:05:17.045 --rc genhtml_function_coverage=1 00:05:17.045 --rc genhtml_legend=1 00:05:17.045 --rc geninfo_all_blocks=1 00:05:17.045 --rc geninfo_unexecuted_blocks=1 00:05:17.045 00:05:17.045 ' 00:05:17.045 17:14:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:17.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.045 --rc genhtml_branch_coverage=1 00:05:17.045 --rc genhtml_function_coverage=1 00:05:17.045 --rc genhtml_legend=1 00:05:17.045 --rc geninfo_all_blocks=1 00:05:17.045 --rc geninfo_unexecuted_blocks=1 00:05:17.045 00:05:17.045 ' 
00:05:17.045 17:14:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:17.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.045 --rc genhtml_branch_coverage=1 00:05:17.045 --rc genhtml_function_coverage=1 00:05:17.045 --rc genhtml_legend=1 00:05:17.045 --rc geninfo_all_blocks=1 00:05:17.045 --rc geninfo_unexecuted_blocks=1 00:05:17.045 00:05:17.045 ' 00:05:17.045 17:14:36 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:17.045 17:14:36 -- rpc/rpc.sh@65 -- # spdk_pid=2517431 00:05:17.045 17:14:36 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.045 17:14:36 -- rpc/rpc.sh@67 -- # waitforlisten 2517431 00:05:17.045 17:14:36 -- common/autotest_common.sh@829 -- # '[' -z 2517431 ']' 00:05:17.045 17:14:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.045 17:14:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.045 17:14:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.045 17:14:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.045 17:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:17.045 [2024-11-09 17:14:36.614357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:17.045 [2024-11-09 17:14:36.614409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517431 ] 00:05:17.045 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.045 [2024-11-09 17:14:36.683342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.045 [2024-11-09 17:14:36.752421] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:17.045 [2024-11-09 17:14:36.752541] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:17.045 [2024-11-09 17:14:36.752551] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2517431' to capture a snapshot of events at runtime. 00:05:17.045 [2024-11-09 17:14:36.752560] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2517431 for offline analysis/debug. 
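For reference, the rpc tests that follow drive the spdk_tgt instance started above through scripts/rpc.py over the default /var/tmp/spdk.sock socket. Stripped of the test wrappers, the rpc_integrity sequence exercised below amounts roughly to the shell sketch here; this is only an illustrative reconstruction from the commands and bdev names reported later in this log, with paths relative to the SPDK checkout, not the harness's exact code path:

  # create an 8 MiB malloc bdev with 512-byte blocks; the RPC prints its name (Malloc0 in this run)
  scripts/rpc.py bdev_malloc_create 8 512
  # layer a passthru bdev on top of it, then confirm both bdevs are visible
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length        # expected to print 2 while both exist
  # tear down in reverse order
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0

The 'spdk_trace -s spdk_tgt -p 2517431' hint printed above is the runtime way to snapshot the bdev tracepoints enabled via '-e bdev'; copying /dev/shm/spdk_tgt_trace.pid2517431 keeps the same data for offline analysis, as the target notes.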
00:05:17.045 [2024-11-09 17:14:36.752583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.982 17:14:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.982 17:14:37 -- common/autotest_common.sh@862 -- # return 0 00:05:17.982 17:14:37 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:17.982 17:14:37 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:17.982 17:14:37 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:17.982 17:14:37 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:17.982 17:14:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.982 17:14:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.982 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:17.982 ************************************ 00:05:17.982 START TEST rpc_integrity 00:05:17.982 ************************************ 00:05:17.982 17:14:37 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:17.982 17:14:37 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.982 17:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.982 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:17.982 17:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.982 17:14:37 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.982 17:14:37 -- rpc/rpc.sh@13 -- # jq length 00:05:17.982 17:14:37 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.982 17:14:37 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.982 17:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.982 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:17.982 17:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.982 17:14:37 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:17.982 17:14:37 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.982 17:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.982 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:17.982 17:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.982 17:14:37 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.982 { 00:05:17.982 "name": "Malloc0", 00:05:17.982 "aliases": [ 00:05:17.982 "da1177bb-3cc2-49ed-b5ae-7996dd72c386" 00:05:17.982 ], 00:05:17.982 "product_name": "Malloc disk", 00:05:17.982 "block_size": 512, 00:05:17.982 "num_blocks": 16384, 00:05:17.982 "uuid": "da1177bb-3cc2-49ed-b5ae-7996dd72c386", 00:05:17.982 "assigned_rate_limits": { 00:05:17.982 "rw_ios_per_sec": 0, 00:05:17.982 "rw_mbytes_per_sec": 0, 00:05:17.982 "r_mbytes_per_sec": 0, 00:05:17.982 "w_mbytes_per_sec": 0 00:05:17.982 }, 00:05:17.982 "claimed": false, 00:05:17.982 "zoned": false, 00:05:17.982 "supported_io_types": { 00:05:17.982 "read": true, 00:05:17.982 "write": true, 00:05:17.982 "unmap": true, 00:05:17.982 "write_zeroes": true, 00:05:17.982 "flush": true, 00:05:17.982 "reset": true, 00:05:17.982 "compare": false, 00:05:17.982 "compare_and_write": false, 00:05:17.982 "abort": true, 00:05:17.982 "nvme_admin": 
false, 00:05:17.982 "nvme_io": false 00:05:17.982 }, 00:05:17.982 "memory_domains": [ 00:05:17.982 { 00:05:17.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.982 "dma_device_type": 2 00:05:17.982 } 00:05:17.982 ], 00:05:17.982 "driver_specific": {} 00:05:17.982 } 00:05:17.982 ]' 00:05:17.982 17:14:37 -- rpc/rpc.sh@17 -- # jq length 00:05:17.982 17:14:37 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.982 17:14:37 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:17.982 17:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.982 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:17.982 [2024-11-09 17:14:37.571913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:17.982 [2024-11-09 17:14:37.571945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.982 [2024-11-09 17:14:37.571959] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x104bf40 00:05:17.982 [2024-11-09 17:14:37.571967] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.982 [2024-11-09 17:14:37.572997] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.982 [2024-11-09 17:14:37.573021] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.982 Passthru0 00:05:17.982 17:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.982 17:14:37 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.982 17:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.982 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:17.982 17:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.982 17:14:37 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.982 { 00:05:17.982 "name": "Malloc0", 00:05:17.982 "aliases": [ 00:05:17.982 "da1177bb-3cc2-49ed-b5ae-7996dd72c386" 00:05:17.982 ], 00:05:17.982 "product_name": "Malloc disk", 00:05:17.982 "block_size": 512, 00:05:17.982 "num_blocks": 16384, 00:05:17.982 "uuid": "da1177bb-3cc2-49ed-b5ae-7996dd72c386", 00:05:17.982 "assigned_rate_limits": { 00:05:17.982 "rw_ios_per_sec": 0, 00:05:17.982 "rw_mbytes_per_sec": 0, 00:05:17.982 "r_mbytes_per_sec": 0, 00:05:17.982 "w_mbytes_per_sec": 0 00:05:17.982 }, 00:05:17.982 "claimed": true, 00:05:17.982 "claim_type": "exclusive_write", 00:05:17.982 "zoned": false, 00:05:17.982 "supported_io_types": { 00:05:17.982 "read": true, 00:05:17.982 "write": true, 00:05:17.982 "unmap": true, 00:05:17.982 "write_zeroes": true, 00:05:17.982 "flush": true, 00:05:17.982 "reset": true, 00:05:17.982 "compare": false, 00:05:17.982 "compare_and_write": false, 00:05:17.982 "abort": true, 00:05:17.982 "nvme_admin": false, 00:05:17.982 "nvme_io": false 00:05:17.982 }, 00:05:17.982 "memory_domains": [ 00:05:17.982 { 00:05:17.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.982 "dma_device_type": 2 00:05:17.982 } 00:05:17.982 ], 00:05:17.982 "driver_specific": {} 00:05:17.982 }, 00:05:17.982 { 00:05:17.982 "name": "Passthru0", 00:05:17.982 "aliases": [ 00:05:17.982 "c8f4bc62-d4d9-5864-964c-c4c29eadcac6" 00:05:17.982 ], 00:05:17.982 "product_name": "passthru", 00:05:17.982 "block_size": 512, 00:05:17.982 "num_blocks": 16384, 00:05:17.982 "uuid": "c8f4bc62-d4d9-5864-964c-c4c29eadcac6", 00:05:17.982 "assigned_rate_limits": { 00:05:17.982 "rw_ios_per_sec": 0, 00:05:17.982 "rw_mbytes_per_sec": 0, 00:05:17.982 "r_mbytes_per_sec": 0, 00:05:17.982 "w_mbytes_per_sec": 0 00:05:17.982 }, 00:05:17.982 "claimed": 
false, 00:05:17.982 "zoned": false, 00:05:17.982 "supported_io_types": { 00:05:17.982 "read": true, 00:05:17.982 "write": true, 00:05:17.982 "unmap": true, 00:05:17.982 "write_zeroes": true, 00:05:17.982 "flush": true, 00:05:17.982 "reset": true, 00:05:17.982 "compare": false, 00:05:17.982 "compare_and_write": false, 00:05:17.982 "abort": true, 00:05:17.982 "nvme_admin": false, 00:05:17.982 "nvme_io": false 00:05:17.982 }, 00:05:17.982 "memory_domains": [ 00:05:17.982 { 00:05:17.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.982 "dma_device_type": 2 00:05:17.982 } 00:05:17.982 ], 00:05:17.982 "driver_specific": { 00:05:17.982 "passthru": { 00:05:17.982 "name": "Passthru0", 00:05:17.982 "base_bdev_name": "Malloc0" 00:05:17.982 } 00:05:17.982 } 00:05:17.982 } 00:05:17.982 ]' 00:05:17.982 17:14:37 -- rpc/rpc.sh@21 -- # jq length 00:05:17.982 17:14:37 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.982 17:14:37 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.982 17:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.982 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:17.982 17:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.982 17:14:37 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:17.982 17:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.982 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:17.982 17:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.982 17:14:37 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.982 17:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.982 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:17.982 17:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.982 17:14:37 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.982 17:14:37 -- rpc/rpc.sh@26 -- # jq length 00:05:17.982 17:14:37 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.982 00:05:17.982 real 0m0.270s 00:05:17.982 user 0m0.163s 00:05:17.982 sys 0m0.046s 00:05:17.982 17:14:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.982 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:17.982 ************************************ 00:05:17.982 END TEST rpc_integrity 00:05:17.983 ************************************ 00:05:17.983 17:14:37 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:17.983 17:14:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.983 17:14:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.983 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:18.241 ************************************ 00:05:18.241 START TEST rpc_plugins 00:05:18.241 ************************************ 00:05:18.241 17:14:37 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:18.241 17:14:37 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:18.241 17:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.241 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:18.241 17:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.241 17:14:37 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:18.241 17:14:37 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:18.241 17:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.241 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:18.241 17:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.241 17:14:37 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:18.241 { 00:05:18.241 "name": 
"Malloc1", 00:05:18.241 "aliases": [ 00:05:18.241 "813c9b4b-ceeb-4a3b-a973-49a3546a7b5f" 00:05:18.241 ], 00:05:18.241 "product_name": "Malloc disk", 00:05:18.241 "block_size": 4096, 00:05:18.241 "num_blocks": 256, 00:05:18.241 "uuid": "813c9b4b-ceeb-4a3b-a973-49a3546a7b5f", 00:05:18.241 "assigned_rate_limits": { 00:05:18.241 "rw_ios_per_sec": 0, 00:05:18.241 "rw_mbytes_per_sec": 0, 00:05:18.241 "r_mbytes_per_sec": 0, 00:05:18.241 "w_mbytes_per_sec": 0 00:05:18.241 }, 00:05:18.241 "claimed": false, 00:05:18.241 "zoned": false, 00:05:18.241 "supported_io_types": { 00:05:18.241 "read": true, 00:05:18.241 "write": true, 00:05:18.241 "unmap": true, 00:05:18.241 "write_zeroes": true, 00:05:18.241 "flush": true, 00:05:18.241 "reset": true, 00:05:18.241 "compare": false, 00:05:18.241 "compare_and_write": false, 00:05:18.241 "abort": true, 00:05:18.241 "nvme_admin": false, 00:05:18.241 "nvme_io": false 00:05:18.241 }, 00:05:18.241 "memory_domains": [ 00:05:18.241 { 00:05:18.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.241 "dma_device_type": 2 00:05:18.241 } 00:05:18.241 ], 00:05:18.241 "driver_specific": {} 00:05:18.241 } 00:05:18.241 ]' 00:05:18.241 17:14:37 -- rpc/rpc.sh@32 -- # jq length 00:05:18.241 17:14:37 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:18.241 17:14:37 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:18.241 17:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.241 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:18.241 17:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.241 17:14:37 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:18.241 17:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.241 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:18.241 17:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.242 17:14:37 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:18.242 17:14:37 -- rpc/rpc.sh@36 -- # jq length 00:05:18.242 17:14:37 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:18.242 00:05:18.242 real 0m0.139s 00:05:18.242 user 0m0.086s 00:05:18.242 sys 0m0.017s 00:05:18.242 17:14:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.242 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:18.242 ************************************ 00:05:18.242 END TEST rpc_plugins 00:05:18.242 ************************************ 00:05:18.242 17:14:37 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:18.242 17:14:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.242 17:14:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.242 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:18.242 ************************************ 00:05:18.242 START TEST rpc_trace_cmd_test 00:05:18.242 ************************************ 00:05:18.242 17:14:37 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:18.242 17:14:37 -- rpc/rpc.sh@40 -- # local info 00:05:18.242 17:14:37 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:18.242 17:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.242 17:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:18.242 17:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.242 17:14:37 -- rpc/rpc.sh@42 -- # info='{ 00:05:18.242 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2517431", 00:05:18.242 "tpoint_group_mask": "0x8", 00:05:18.242 "iscsi_conn": { 00:05:18.242 "mask": "0x2", 00:05:18.242 "tpoint_mask": "0x0" 00:05:18.242 }, 00:05:18.242 
"scsi": { 00:05:18.242 "mask": "0x4", 00:05:18.242 "tpoint_mask": "0x0" 00:05:18.242 }, 00:05:18.242 "bdev": { 00:05:18.242 "mask": "0x8", 00:05:18.242 "tpoint_mask": "0xffffffffffffffff" 00:05:18.242 }, 00:05:18.242 "nvmf_rdma": { 00:05:18.242 "mask": "0x10", 00:05:18.242 "tpoint_mask": "0x0" 00:05:18.242 }, 00:05:18.242 "nvmf_tcp": { 00:05:18.242 "mask": "0x20", 00:05:18.242 "tpoint_mask": "0x0" 00:05:18.242 }, 00:05:18.242 "ftl": { 00:05:18.242 "mask": "0x40", 00:05:18.242 "tpoint_mask": "0x0" 00:05:18.242 }, 00:05:18.242 "blobfs": { 00:05:18.242 "mask": "0x80", 00:05:18.242 "tpoint_mask": "0x0" 00:05:18.242 }, 00:05:18.242 "dsa": { 00:05:18.242 "mask": "0x200", 00:05:18.242 "tpoint_mask": "0x0" 00:05:18.242 }, 00:05:18.242 "thread": { 00:05:18.242 "mask": "0x400", 00:05:18.242 "tpoint_mask": "0x0" 00:05:18.242 }, 00:05:18.242 "nvme_pcie": { 00:05:18.242 "mask": "0x800", 00:05:18.242 "tpoint_mask": "0x0" 00:05:18.242 }, 00:05:18.242 "iaa": { 00:05:18.242 "mask": "0x1000", 00:05:18.242 "tpoint_mask": "0x0" 00:05:18.242 }, 00:05:18.242 "nvme_tcp": { 00:05:18.242 "mask": "0x2000", 00:05:18.242 "tpoint_mask": "0x0" 00:05:18.242 }, 00:05:18.242 "bdev_nvme": { 00:05:18.242 "mask": "0x4000", 00:05:18.242 "tpoint_mask": "0x0" 00:05:18.242 } 00:05:18.242 }' 00:05:18.242 17:14:37 -- rpc/rpc.sh@43 -- # jq length 00:05:18.242 17:14:38 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:18.242 17:14:38 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:18.510 17:14:38 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:18.510 17:14:38 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:18.510 17:14:38 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:18.510 17:14:38 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:18.510 17:14:38 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:18.510 17:14:38 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:18.510 17:14:38 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:18.510 00:05:18.510 real 0m0.211s 00:05:18.510 user 0m0.168s 00:05:18.510 sys 0m0.035s 00:05:18.510 17:14:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.510 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.510 ************************************ 00:05:18.510 END TEST rpc_trace_cmd_test 00:05:18.510 ************************************ 00:05:18.510 17:14:38 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:18.510 17:14:38 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:18.510 17:14:38 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:18.510 17:14:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.510 17:14:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.510 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.510 ************************************ 00:05:18.510 START TEST rpc_daemon_integrity 00:05:18.510 ************************************ 00:05:18.510 17:14:38 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:18.510 17:14:38 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.510 17:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.510 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.510 17:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.510 17:14:38 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:18.510 17:14:38 -- rpc/rpc.sh@13 -- # jq length 00:05:18.510 17:14:38 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.510 17:14:38 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.510 17:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:05:18.510 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.511 17:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.511 17:14:38 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:18.511 17:14:38 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:18.511 17:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.511 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.511 17:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.511 17:14:38 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:18.511 { 00:05:18.511 "name": "Malloc2", 00:05:18.511 "aliases": [ 00:05:18.511 "7c28b6da-db76-495d-aec9-a64a0966c31a" 00:05:18.511 ], 00:05:18.511 "product_name": "Malloc disk", 00:05:18.511 "block_size": 512, 00:05:18.511 "num_blocks": 16384, 00:05:18.511 "uuid": "7c28b6da-db76-495d-aec9-a64a0966c31a", 00:05:18.511 "assigned_rate_limits": { 00:05:18.511 "rw_ios_per_sec": 0, 00:05:18.511 "rw_mbytes_per_sec": 0, 00:05:18.511 "r_mbytes_per_sec": 0, 00:05:18.511 "w_mbytes_per_sec": 0 00:05:18.511 }, 00:05:18.511 "claimed": false, 00:05:18.511 "zoned": false, 00:05:18.511 "supported_io_types": { 00:05:18.511 "read": true, 00:05:18.511 "write": true, 00:05:18.511 "unmap": true, 00:05:18.511 "write_zeroes": true, 00:05:18.511 "flush": true, 00:05:18.511 "reset": true, 00:05:18.511 "compare": false, 00:05:18.511 "compare_and_write": false, 00:05:18.511 "abort": true, 00:05:18.511 "nvme_admin": false, 00:05:18.511 "nvme_io": false 00:05:18.511 }, 00:05:18.511 "memory_domains": [ 00:05:18.511 { 00:05:18.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.511 "dma_device_type": 2 00:05:18.511 } 00:05:18.511 ], 00:05:18.511 "driver_specific": {} 00:05:18.511 } 00:05:18.511 ]' 00:05:18.511 17:14:38 -- rpc/rpc.sh@17 -- # jq length 00:05:18.777 17:14:38 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.777 17:14:38 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:18.777 17:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.777 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.777 [2024-11-09 17:14:38.313908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:18.777 [2024-11-09 17:14:38.313939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.777 [2024-11-09 17:14:38.313957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x104d740 00:05:18.777 [2024-11-09 17:14:38.313965] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.777 [2024-11-09 17:14:38.314881] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.777 [2024-11-09 17:14:38.314904] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.777 Passthru0 00:05:18.777 17:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.777 17:14:38 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:18.777 17:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.777 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.777 17:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.777 17:14:38 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.777 { 00:05:18.777 "name": "Malloc2", 00:05:18.777 "aliases": [ 00:05:18.777 "7c28b6da-db76-495d-aec9-a64a0966c31a" 00:05:18.777 ], 00:05:18.777 "product_name": "Malloc disk", 00:05:18.777 "block_size": 512, 00:05:18.777 "num_blocks": 16384, 00:05:18.777 "uuid": "7c28b6da-db76-495d-aec9-a64a0966c31a", 
00:05:18.777 "assigned_rate_limits": { 00:05:18.777 "rw_ios_per_sec": 0, 00:05:18.777 "rw_mbytes_per_sec": 0, 00:05:18.777 "r_mbytes_per_sec": 0, 00:05:18.777 "w_mbytes_per_sec": 0 00:05:18.777 }, 00:05:18.777 "claimed": true, 00:05:18.777 "claim_type": "exclusive_write", 00:05:18.777 "zoned": false, 00:05:18.777 "supported_io_types": { 00:05:18.777 "read": true, 00:05:18.777 "write": true, 00:05:18.777 "unmap": true, 00:05:18.777 "write_zeroes": true, 00:05:18.777 "flush": true, 00:05:18.777 "reset": true, 00:05:18.777 "compare": false, 00:05:18.777 "compare_and_write": false, 00:05:18.777 "abort": true, 00:05:18.777 "nvme_admin": false, 00:05:18.777 "nvme_io": false 00:05:18.777 }, 00:05:18.777 "memory_domains": [ 00:05:18.777 { 00:05:18.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.777 "dma_device_type": 2 00:05:18.777 } 00:05:18.777 ], 00:05:18.777 "driver_specific": {} 00:05:18.777 }, 00:05:18.777 { 00:05:18.777 "name": "Passthru0", 00:05:18.777 "aliases": [ 00:05:18.777 "bf498288-3e7c-5dd1-9b4a-4aa942cfb373" 00:05:18.777 ], 00:05:18.777 "product_name": "passthru", 00:05:18.777 "block_size": 512, 00:05:18.777 "num_blocks": 16384, 00:05:18.777 "uuid": "bf498288-3e7c-5dd1-9b4a-4aa942cfb373", 00:05:18.777 "assigned_rate_limits": { 00:05:18.777 "rw_ios_per_sec": 0, 00:05:18.777 "rw_mbytes_per_sec": 0, 00:05:18.777 "r_mbytes_per_sec": 0, 00:05:18.777 "w_mbytes_per_sec": 0 00:05:18.777 }, 00:05:18.777 "claimed": false, 00:05:18.777 "zoned": false, 00:05:18.777 "supported_io_types": { 00:05:18.777 "read": true, 00:05:18.777 "write": true, 00:05:18.777 "unmap": true, 00:05:18.777 "write_zeroes": true, 00:05:18.777 "flush": true, 00:05:18.777 "reset": true, 00:05:18.777 "compare": false, 00:05:18.777 "compare_and_write": false, 00:05:18.777 "abort": true, 00:05:18.777 "nvme_admin": false, 00:05:18.777 "nvme_io": false 00:05:18.777 }, 00:05:18.777 "memory_domains": [ 00:05:18.777 { 00:05:18.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.777 "dma_device_type": 2 00:05:18.777 } 00:05:18.777 ], 00:05:18.777 "driver_specific": { 00:05:18.777 "passthru": { 00:05:18.777 "name": "Passthru0", 00:05:18.777 "base_bdev_name": "Malloc2" 00:05:18.777 } 00:05:18.777 } 00:05:18.777 } 00:05:18.777 ]' 00:05:18.777 17:14:38 -- rpc/rpc.sh@21 -- # jq length 00:05:18.777 17:14:38 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.777 17:14:38 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.777 17:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.777 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.777 17:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.777 17:14:38 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:18.777 17:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.777 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.777 17:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.777 17:14:38 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:18.777 17:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.777 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.777 17:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.777 17:14:38 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.777 17:14:38 -- rpc/rpc.sh@26 -- # jq length 00:05:18.778 17:14:38 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:18.778 00:05:18.778 real 0m0.273s 00:05:18.778 user 0m0.161s 00:05:18.778 sys 0m0.052s 00:05:18.778 17:14:38 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:05:18.778 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.778 ************************************ 00:05:18.778 END TEST rpc_daemon_integrity 00:05:18.778 ************************************ 00:05:18.778 17:14:38 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:18.778 17:14:38 -- rpc/rpc.sh@84 -- # killprocess 2517431 00:05:18.778 17:14:38 -- common/autotest_common.sh@936 -- # '[' -z 2517431 ']' 00:05:18.778 17:14:38 -- common/autotest_common.sh@940 -- # kill -0 2517431 00:05:18.778 17:14:38 -- common/autotest_common.sh@941 -- # uname 00:05:18.778 17:14:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:18.778 17:14:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2517431 00:05:19.036 17:14:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:19.036 17:14:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:19.036 17:14:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2517431' 00:05:19.036 killing process with pid 2517431 00:05:19.036 17:14:38 -- common/autotest_common.sh@955 -- # kill 2517431 00:05:19.036 17:14:38 -- common/autotest_common.sh@960 -- # wait 2517431 00:05:19.295 00:05:19.295 real 0m2.503s 00:05:19.295 user 0m3.104s 00:05:19.295 sys 0m0.744s 00:05:19.295 17:14:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.295 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:19.295 ************************************ 00:05:19.295 END TEST rpc 00:05:19.295 ************************************ 00:05:19.295 17:14:38 -- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:19.295 17:14:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.295 17:14:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.295 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:19.295 ************************************ 00:05:19.295 START TEST rpc_client 00:05:19.295 ************************************ 00:05:19.295 17:14:38 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:19.295 * Looking for test storage... 
00:05:19.295 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:19.295 17:14:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:19.295 17:14:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:19.295 17:14:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:19.554 17:14:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:19.554 17:14:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:19.554 17:14:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:19.554 17:14:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:19.554 17:14:39 -- scripts/common.sh@335 -- # IFS=.-: 00:05:19.554 17:14:39 -- scripts/common.sh@335 -- # read -ra ver1 00:05:19.554 17:14:39 -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.554 17:14:39 -- scripts/common.sh@336 -- # read -ra ver2 00:05:19.554 17:14:39 -- scripts/common.sh@337 -- # local 'op=<' 00:05:19.554 17:14:39 -- scripts/common.sh@339 -- # ver1_l=2 00:05:19.554 17:14:39 -- scripts/common.sh@340 -- # ver2_l=1 00:05:19.554 17:14:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:19.554 17:14:39 -- scripts/common.sh@343 -- # case "$op" in 00:05:19.554 17:14:39 -- scripts/common.sh@344 -- # : 1 00:05:19.554 17:14:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:19.554 17:14:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.554 17:14:39 -- scripts/common.sh@364 -- # decimal 1 00:05:19.554 17:14:39 -- scripts/common.sh@352 -- # local d=1 00:05:19.554 17:14:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.554 17:14:39 -- scripts/common.sh@354 -- # echo 1 00:05:19.554 17:14:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:19.554 17:14:39 -- scripts/common.sh@365 -- # decimal 2 00:05:19.554 17:14:39 -- scripts/common.sh@352 -- # local d=2 00:05:19.554 17:14:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.554 17:14:39 -- scripts/common.sh@354 -- # echo 2 00:05:19.554 17:14:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:19.554 17:14:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:19.554 17:14:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:19.554 17:14:39 -- scripts/common.sh@367 -- # return 0 00:05:19.554 17:14:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.554 17:14:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:19.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.554 --rc genhtml_branch_coverage=1 00:05:19.554 --rc genhtml_function_coverage=1 00:05:19.554 --rc genhtml_legend=1 00:05:19.554 --rc geninfo_all_blocks=1 00:05:19.554 --rc geninfo_unexecuted_blocks=1 00:05:19.554 00:05:19.554 ' 00:05:19.554 17:14:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:19.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.554 --rc genhtml_branch_coverage=1 00:05:19.554 --rc genhtml_function_coverage=1 00:05:19.554 --rc genhtml_legend=1 00:05:19.554 --rc geninfo_all_blocks=1 00:05:19.554 --rc geninfo_unexecuted_blocks=1 00:05:19.554 00:05:19.554 ' 00:05:19.554 17:14:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:19.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.554 --rc genhtml_branch_coverage=1 00:05:19.554 --rc genhtml_function_coverage=1 00:05:19.554 --rc genhtml_legend=1 00:05:19.554 --rc geninfo_all_blocks=1 00:05:19.554 --rc geninfo_unexecuted_blocks=1 00:05:19.554 00:05:19.554 ' 
00:05:19.554 17:14:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:19.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.554 --rc genhtml_branch_coverage=1 00:05:19.554 --rc genhtml_function_coverage=1 00:05:19.554 --rc genhtml_legend=1 00:05:19.554 --rc geninfo_all_blocks=1 00:05:19.554 --rc geninfo_unexecuted_blocks=1 00:05:19.554 00:05:19.554 ' 00:05:19.554 17:14:39 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:19.554 OK 00:05:19.554 17:14:39 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:19.554 00:05:19.554 real 0m0.215s 00:05:19.554 user 0m0.128s 00:05:19.554 sys 0m0.104s 00:05:19.554 17:14:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.554 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.554 ************************************ 00:05:19.554 END TEST rpc_client 00:05:19.554 ************************************ 00:05:19.554 17:14:39 -- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:19.554 17:14:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.554 17:14:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.554 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.554 ************************************ 00:05:19.554 START TEST json_config 00:05:19.554 ************************************ 00:05:19.554 17:14:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:19.554 17:14:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:19.554 17:14:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:19.554 17:14:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:19.814 17:14:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:19.814 17:14:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:19.814 17:14:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:19.814 17:14:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:19.814 17:14:39 -- scripts/common.sh@335 -- # IFS=.-: 00:05:19.814 17:14:39 -- scripts/common.sh@335 -- # read -ra ver1 00:05:19.814 17:14:39 -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.814 17:14:39 -- scripts/common.sh@336 -- # read -ra ver2 00:05:19.814 17:14:39 -- scripts/common.sh@337 -- # local 'op=<' 00:05:19.814 17:14:39 -- scripts/common.sh@339 -- # ver1_l=2 00:05:19.814 17:14:39 -- scripts/common.sh@340 -- # ver2_l=1 00:05:19.814 17:14:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:19.814 17:14:39 -- scripts/common.sh@343 -- # case "$op" in 00:05:19.814 17:14:39 -- scripts/common.sh@344 -- # : 1 00:05:19.814 17:14:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:19.814 17:14:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.814 17:14:39 -- scripts/common.sh@364 -- # decimal 1 00:05:19.814 17:14:39 -- scripts/common.sh@352 -- # local d=1 00:05:19.814 17:14:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.814 17:14:39 -- scripts/common.sh@354 -- # echo 1 00:05:19.814 17:14:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:19.814 17:14:39 -- scripts/common.sh@365 -- # decimal 2 00:05:19.814 17:14:39 -- scripts/common.sh@352 -- # local d=2 00:05:19.814 17:14:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.814 17:14:39 -- scripts/common.sh@354 -- # echo 2 00:05:19.814 17:14:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:19.814 17:14:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:19.814 17:14:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:19.814 17:14:39 -- scripts/common.sh@367 -- # return 0 00:05:19.814 17:14:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.814 17:14:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:19.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.814 --rc genhtml_branch_coverage=1 00:05:19.814 --rc genhtml_function_coverage=1 00:05:19.814 --rc genhtml_legend=1 00:05:19.814 --rc geninfo_all_blocks=1 00:05:19.814 --rc geninfo_unexecuted_blocks=1 00:05:19.814 00:05:19.814 ' 00:05:19.814 17:14:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:19.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.814 --rc genhtml_branch_coverage=1 00:05:19.814 --rc genhtml_function_coverage=1 00:05:19.814 --rc genhtml_legend=1 00:05:19.814 --rc geninfo_all_blocks=1 00:05:19.814 --rc geninfo_unexecuted_blocks=1 00:05:19.814 00:05:19.814 ' 00:05:19.814 17:14:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:19.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.814 --rc genhtml_branch_coverage=1 00:05:19.814 --rc genhtml_function_coverage=1 00:05:19.814 --rc genhtml_legend=1 00:05:19.814 --rc geninfo_all_blocks=1 00:05:19.814 --rc geninfo_unexecuted_blocks=1 00:05:19.814 00:05:19.814 ' 00:05:19.814 17:14:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:19.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.814 --rc genhtml_branch_coverage=1 00:05:19.814 --rc genhtml_function_coverage=1 00:05:19.814 --rc genhtml_legend=1 00:05:19.814 --rc geninfo_all_blocks=1 00:05:19.814 --rc geninfo_unexecuted_blocks=1 00:05:19.814 00:05:19.814 ' 00:05:19.814 17:14:39 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.814 17:14:39 -- nvmf/common.sh@7 -- # uname -s 00:05:19.814 17:14:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.814 17:14:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.814 17:14:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.814 17:14:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.814 17:14:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.814 17:14:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.814 17:14:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.814 17:14:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.814 17:14:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.814 17:14:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.814 17:14:39 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:19.814 17:14:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:19.814 17:14:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.814 17:14:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.814 17:14:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.814 17:14:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:19.814 17:14:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.814 17:14:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.814 17:14:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.814 17:14:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.814 17:14:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.814 17:14:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.814 17:14:39 -- paths/export.sh@5 -- # export PATH 00:05:19.815 17:14:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.815 17:14:39 -- nvmf/common.sh@46 -- # : 0 00:05:19.815 17:14:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:19.815 17:14:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:19.815 17:14:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:19.815 17:14:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.815 17:14:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.815 17:14:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:19.815 17:14:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:19.815 17:14:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:19.815 17:14:39 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:19.815 17:14:39 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:19.815 17:14:39 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:19.815 17:14:39 -- 
json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:19.815 17:14:39 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:19.815 17:14:39 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:19.815 17:14:39 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:19.815 17:14:39 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:19.815 17:14:39 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:19.815 17:14:39 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:19.815 17:14:39 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:19.815 17:14:39 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:19.815 17:14:39 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:19.815 17:14:39 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.815 17:14:39 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:19.815 INFO: JSON configuration test init 00:05:19.815 17:14:39 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:19.815 17:14:39 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:19.815 17:14:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.815 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.815 17:14:39 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:19.815 17:14:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.815 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.815 17:14:39 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:19.815 17:14:39 -- json_config/json_config.sh@98 -- # local app=target 00:05:19.815 17:14:39 -- json_config/json_config.sh@99 -- # shift 00:05:19.815 17:14:39 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:19.815 17:14:39 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:19.815 17:14:39 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:19.815 17:14:39 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:19.815 17:14:39 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:19.815 17:14:39 -- json_config/json_config.sh@111 -- # app_pid[$app]=2518090 00:05:19.815 17:14:39 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:19.815 Waiting for target to run... 
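For reference, the json_config startup that follows boils down to launching a dedicated target on its own RPC socket and replaying a generated configuration into it. A rough sketch of that interaction, assembled from the flags and paths visible in this log (the harness additionally waits for the socket to appear before issuing any RPCs):

  # start a target that defers framework init until it is configured over /var/tmp/spdk_tgt.sock
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # generate an NVMe subsystem config for the local drives and feed it to the waiting target
  scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config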
00:05:19.815 17:14:39 -- json_config/json_config.sh@114 -- # waitforlisten 2518090 /var/tmp/spdk_tgt.sock 00:05:19.815 17:14:39 -- common/autotest_common.sh@829 -- # '[' -z 2518090 ']' 00:05:19.815 17:14:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.815 17:14:39 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:19.815 17:14:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.815 17:14:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.815 17:14:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.815 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:19.815 [2024-11-09 17:14:39.461919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:19.815 [2024-11-09 17:14:39.461975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518090 ] 00:05:19.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.382 [2024-11-09 17:14:39.910673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.382 [2024-11-09 17:14:39.991473] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.382 [2024-11-09 17:14:39.991580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.640 17:14:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.640 17:14:40 -- common/autotest_common.sh@862 -- # return 0 00:05:20.640 17:14:40 -- json_config/json_config.sh@115 -- # echo '' 00:05:20.640 00:05:20.640 17:14:40 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:20.640 17:14:40 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:20.640 17:14:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:20.640 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.640 17:14:40 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:20.640 17:14:40 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:20.640 17:14:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:20.640 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.640 17:14:40 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:20.640 17:14:40 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:20.640 17:14:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:24.033 17:14:43 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:24.033 17:14:43 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:24.033 17:14:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.034 17:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:24.034 17:14:43 -- json_config/json_config.sh@48 -- # local ret=0 00:05:24.034 17:14:43 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:24.034 17:14:43 -- 
json_config/json_config.sh@49 -- # local enabled_types 00:05:24.034 17:14:43 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:24.034 17:14:43 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:24.034 17:14:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:24.034 17:14:43 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:24.034 17:14:43 -- json_config/json_config.sh@51 -- # local get_types 00:05:24.034 17:14:43 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:24.034 17:14:43 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:24.034 17:14:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.034 17:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:24.034 17:14:43 -- json_config/json_config.sh@58 -- # return 0 00:05:24.034 17:14:43 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:24.034 17:14:43 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:24.034 17:14:43 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:24.034 17:14:43 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:24.034 17:14:43 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:24.034 17:14:43 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:24.034 17:14:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.034 17:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:24.034 17:14:43 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:24.034 17:14:43 -- json_config/json_config.sh@286 -- # [[ rdma == \r\d\m\a ]] 00:05:24.034 17:14:43 -- json_config/json_config.sh@287 -- # TEST_TRANSPORT=rdma 00:05:24.034 17:14:43 -- json_config/json_config.sh@287 -- # nvmftestinit 00:05:24.034 17:14:43 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:05:24.034 17:14:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:24.034 17:14:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:05:24.034 17:14:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:05:24.034 17:14:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:05:24.034 17:14:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:24.034 17:14:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:24.034 17:14:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:24.034 17:14:43 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:05:24.034 17:14:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:05:24.034 17:14:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:05:24.034 17:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:30.594 17:14:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:30.594 17:14:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:05:30.594 17:14:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:05:30.594 17:14:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:05:30.594 17:14:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:05:30.594 17:14:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:05:30.594 17:14:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:05:30.594 17:14:50 -- nvmf/common.sh@294 -- # net_devs=() 00:05:30.594 17:14:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:05:30.594 17:14:50 -- nvmf/common.sh@295 -- # 
e810=() 00:05:30.594 17:14:50 -- nvmf/common.sh@295 -- # local -ga e810 00:05:30.594 17:14:50 -- nvmf/common.sh@296 -- # x722=() 00:05:30.594 17:14:50 -- nvmf/common.sh@296 -- # local -ga x722 00:05:30.594 17:14:50 -- nvmf/common.sh@297 -- # mlx=() 00:05:30.594 17:14:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:05:30.594 17:14:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:30.594 17:14:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:30.594 17:14:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:30.595 17:14:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:30.595 17:14:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:30.595 17:14:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:30.595 17:14:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:30.595 17:14:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:30.595 17:14:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:30.595 17:14:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:30.595 17:14:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:30.595 17:14:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:05:30.595 17:14:50 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:05:30.595 17:14:50 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:05:30.595 17:14:50 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:05:30.595 17:14:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:05:30.595 17:14:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:30.595 17:14:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:30.595 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:30.595 17:14:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:30.595 17:14:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:30.595 17:14:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:30.595 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:30.595 17:14:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:30.595 17:14:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:05:30.595 17:14:50 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:30.595 17:14:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:30.595 17:14:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
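The scan above walks pci_bus_cache for the known e810/x722/mlx device IDs and, because SPDK_TEST_NVMF_NICS=mlx5, keeps only the Mellanox entries (vendor 0x15b3, device 0x1015 on this host). A rough manual equivalent for locating those NICs and their net devices (lspci vendor filter only, no driver or link-state checks):

  # List Mellanox PCI functions with numeric vendor:device IDs.
  lspci -nn -d 15b3:
  # The matching netdev name sits under the device's sysfs node,
  # e.g. for the first port reported in this run:
  ls /sys/bus/pci/devices/0000:d9:00.0/net/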
00:05:30.595 17:14:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:30.595 17:14:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:30.595 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:30.595 17:14:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:30.595 17:14:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:30.595 17:14:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:30.595 17:14:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:05:30.595 17:14:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:30.595 17:14:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:30.595 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:30.595 17:14:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:30.595 17:14:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:05:30.595 17:14:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:05:30.595 17:14:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@408 -- # rdma_device_init 00:05:30.595 17:14:50 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:05:30.595 17:14:50 -- nvmf/common.sh@57 -- # uname 00:05:30.595 17:14:50 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:05:30.595 17:14:50 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:05:30.595 17:14:50 -- nvmf/common.sh@62 -- # modprobe ib_core 00:05:30.595 17:14:50 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:05:30.595 17:14:50 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:05:30.595 17:14:50 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:05:30.595 17:14:50 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:05:30.595 17:14:50 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:05:30.595 17:14:50 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:05:30.595 17:14:50 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:30.595 17:14:50 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:05:30.595 17:14:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:30.595 17:14:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:30.595 17:14:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:30.595 17:14:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:30.595 17:14:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:30.595 17:14:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:30.595 17:14:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:30.595 17:14:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:30.595 17:14:50 -- nvmf/common.sh@104 -- # continue 2 00:05:30.595 17:14:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:30.595 17:14:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:30.595 17:14:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:30.595 17:14:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:30.595 17:14:50 -- nvmf/common.sh@104 -- # continue 2 00:05:30.595 17:14:50 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:05:30.595 17:14:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:05:30.595 17:14:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:30.595 17:14:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:30.595 17:14:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:30.595 17:14:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:30.595 17:14:50 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:05:30.595 17:14:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:05:30.595 17:14:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:05:30.854 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:30.854 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:30.854 altname enp217s0f0np0 00:05:30.854 altname ens818f0np0 00:05:30.854 inet 192.168.100.8/24 scope global mlx_0_0 00:05:30.854 valid_lft forever preferred_lft forever 00:05:30.854 17:14:50 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:05:30.854 17:14:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:05:30.854 17:14:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:30.854 17:14:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:30.854 17:14:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:30.854 17:14:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:30.854 17:14:50 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:05:30.854 17:14:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:05:30.854 17:14:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:05:30.854 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:30.854 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:30.854 altname enp217s0f1np1 00:05:30.854 altname ens818f1np1 00:05:30.854 inet 192.168.100.9/24 scope global mlx_0_1 00:05:30.854 valid_lft forever preferred_lft forever 00:05:30.854 17:14:50 -- nvmf/common.sh@410 -- # return 0 00:05:30.854 17:14:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:05:30.854 17:14:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:30.854 17:14:50 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:05:30.854 17:14:50 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:05:30.854 17:14:50 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:05:30.854 17:14:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:30.854 17:14:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:30.854 17:14:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:30.854 17:14:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:30.854 17:14:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:30.854 17:14:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:30.854 17:14:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:30.854 17:14:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:30.854 17:14:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:30.854 17:14:50 -- nvmf/common.sh@104 -- # continue 2 00:05:30.854 17:14:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:30.854 17:14:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:30.854 17:14:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:30.854 17:14:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:30.854 17:14:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:30.854 17:14:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:30.854 17:14:50 -- 
nvmf/common.sh@104 -- # continue 2 00:05:30.854 17:14:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:30.854 17:14:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:05:30.854 17:14:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:30.854 17:14:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:30.854 17:14:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:30.854 17:14:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:30.854 17:14:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:30.854 17:14:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:05:30.854 17:14:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:30.854 17:14:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:30.854 17:14:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:30.854 17:14:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:30.854 17:14:50 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:05:30.854 192.168.100.9' 00:05:30.854 17:14:50 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:05:30.854 192.168.100.9' 00:05:30.854 17:14:50 -- nvmf/common.sh@445 -- # head -n 1 00:05:30.854 17:14:50 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:30.854 17:14:50 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:05:30.854 192.168.100.9' 00:05:30.854 17:14:50 -- nvmf/common.sh@446 -- # tail -n +2 00:05:30.854 17:14:50 -- nvmf/common.sh@446 -- # head -n 1 00:05:30.854 17:14:50 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:30.854 17:14:50 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:05:30.854 17:14:50 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:30.854 17:14:50 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:05:30.854 17:14:50 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:05:30.854 17:14:50 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:05:30.854 17:14:50 -- json_config/json_config.sh@290 -- # [[ -z 192.168.100.8 ]] 00:05:30.854 17:14:50 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:30.854 17:14:50 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:31.112 MallocForNvmf0 00:05:31.112 17:14:50 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:31.112 17:14:50 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:31.112 MallocForNvmf1 00:05:31.112 17:14:50 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:31.112 17:14:50 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:31.370 [2024-11-09 17:14:51.000534] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:31.370 [2024-11-09 17:14:51.031725] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1636040/0x1642ce0) succeed. 00:05:31.370 [2024-11-09 17:14:51.043329] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1638230/0x1684380) succeed. 
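By this point nvmftestinit has loaded the RDMA/IB kernel modules and resolved an IPv4 address for each mlx port; condensed from the calls above (interface names as reported in this run):

  # RDMA/IB stack needed for the rdma transport.
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done
  # First IPv4 address on each RDMA-capable port (192.168.100.8 / .9 here).
  for ifc in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done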
00:05:31.370 17:14:51 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:31.370 17:14:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:31.628 17:14:51 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:31.628 17:14:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:31.886 17:14:51 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:31.886 17:14:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:31.886 17:14:51 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:31.886 17:14:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:32.144 [2024-11-09 17:14:51.763438] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:32.144 17:14:51 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:32.144 17:14:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.144 17:14:51 -- common/autotest_common.sh@10 -- # set +x 00:05:32.144 17:14:51 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:32.144 17:14:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.144 17:14:51 -- common/autotest_common.sh@10 -- # set +x 00:05:32.144 17:14:51 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:32.144 17:14:51 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:32.144 17:14:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:32.402 MallocBdevForConfigChangeCheck 00:05:32.403 17:14:52 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:32.403 17:14:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.403 17:14:52 -- common/autotest_common.sh@10 -- # set +x 00:05:32.403 17:14:52 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:32.403 17:14:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.660 17:14:52 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:32.660 INFO: shutting down applications... 
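Taken together, the RPC calls in this stage build the RDMA target configuration that the saved JSON later has to reproduce; issued directly against the target socket, the sequence is roughly:

  RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  # Two malloc bdevs that will back the namespaces.
  $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  # RDMA transport, one subsystem, both namespaces, and an RDMA listener on port 4420.
  $RPC nvmf_create_transport -t rdma -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420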
00:05:32.660 17:14:52 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:32.660 17:14:52 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:32.660 17:14:52 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:32.660 17:14:52 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:35.190 Calling clear_iscsi_subsystem 00:05:35.190 Calling clear_nvmf_subsystem 00:05:35.190 Calling clear_nbd_subsystem 00:05:35.191 Calling clear_ublk_subsystem 00:05:35.191 Calling clear_vhost_blk_subsystem 00:05:35.191 Calling clear_vhost_scsi_subsystem 00:05:35.191 Calling clear_scheduler_subsystem 00:05:35.191 Calling clear_bdev_subsystem 00:05:35.191 Calling clear_accel_subsystem 00:05:35.191 Calling clear_vmd_subsystem 00:05:35.191 Calling clear_sock_subsystem 00:05:35.191 Calling clear_iobuf_subsystem 00:05:35.191 17:14:54 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:35.191 17:14:54 -- json_config/json_config.sh@396 -- # count=100 00:05:35.191 17:14:54 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:35.191 17:14:54 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:35.191 17:14:54 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.191 17:14:54 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.758 17:14:55 -- json_config/json_config.sh@398 -- # break 00:05:35.758 17:14:55 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:35.758 17:14:55 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:35.758 17:14:55 -- json_config/json_config.sh@120 -- # local app=target 00:05:35.758 17:14:55 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:35.758 17:14:55 -- json_config/json_config.sh@124 -- # [[ -n 2518090 ]] 00:05:35.758 17:14:55 -- json_config/json_config.sh@127 -- # kill -SIGINT 2518090 00:05:35.758 17:14:55 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:35.758 17:14:55 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:35.758 17:14:55 -- json_config/json_config.sh@130 -- # kill -0 2518090 00:05:35.758 17:14:55 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:36.017 17:14:55 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:36.017 17:14:55 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:36.017 17:14:55 -- json_config/json_config.sh@130 -- # kill -0 2518090 00:05:36.017 17:14:55 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:36.017 17:14:55 -- json_config/json_config.sh@132 -- # break 00:05:36.017 17:14:55 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:36.017 17:14:55 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:36.017 SPDK target shutdown done 00:05:36.017 17:14:55 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:36.017 INFO: relaunching applications... 
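Shutdown is cooperative: the target receives SIGINT and the harness polls with kill -0 for up to 30 half-second intervals before declaring it gone. Stripped down, with the pid hard-coded to the value from this run for illustration:

  app_pid=2518090
  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
      # kill -0 only probes for existence; once it fails, the process has exited.
      if ! kill -0 "$app_pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done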
00:05:36.017 17:14:55 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.017 17:14:55 -- json_config/json_config.sh@98 -- # local app=target 00:05:36.017 17:14:55 -- json_config/json_config.sh@99 -- # shift 00:05:36.017 17:14:55 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:36.017 17:14:55 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:36.017 17:14:55 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:36.017 17:14:55 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:36.017 17:14:55 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:36.017 17:14:55 -- json_config/json_config.sh@111 -- # app_pid[$app]=2523155 00:05:36.017 17:14:55 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:36.017 Waiting for target to run... 00:05:36.017 17:14:55 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.017 17:14:55 -- json_config/json_config.sh@114 -- # waitforlisten 2523155 /var/tmp/spdk_tgt.sock 00:05:36.017 17:14:55 -- common/autotest_common.sh@829 -- # '[' -z 2523155 ']' 00:05:36.017 17:14:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.017 17:14:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.017 17:14:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.017 17:14:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.017 17:14:55 -- common/autotest_common.sh@10 -- # set +x 00:05:36.277 [2024-11-09 17:14:55.792197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:36.277 [2024-11-09 17:14:55.792262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523155 ] 00:05:36.277 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.536 [2024-11-09 17:14:56.229728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.794 [2024-11-09 17:14:56.313694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.794 [2024-11-09 17:14:56.313803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.077 [2024-11-09 17:14:59.362693] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1647970/0x1605db0) succeed. 00:05:40.077 [2024-11-09 17:14:59.374271] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1649b60/0x14b41d0) succeed. 
00:05:40.077 [2024-11-09 17:14:59.422052] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:40.335 17:14:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.335 17:14:59 -- common/autotest_common.sh@862 -- # return 0 00:05:40.335 17:14:59 -- json_config/json_config.sh@115 -- # echo '' 00:05:40.335 00:05:40.335 17:14:59 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:40.335 17:14:59 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:40.335 INFO: Checking if target configuration is the same... 00:05:40.335 17:14:59 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.335 17:14:59 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:40.335 17:14:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.335 + '[' 2 -ne 2 ']' 00:05:40.335 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:40.335 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:40.335 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:40.335 +++ basename /dev/fd/62 00:05:40.335 ++ mktemp /tmp/62.XXX 00:05:40.335 + tmp_file_1=/tmp/62.bpc 00:05:40.335 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.335 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:40.335 + tmp_file_2=/tmp/spdk_tgt_config.json.8I5 00:05:40.335 + ret=0 00:05:40.335 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:40.593 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:40.593 + diff -u /tmp/62.bpc /tmp/spdk_tgt_config.json.8I5 00:05:40.593 + echo 'INFO: JSON config files are the same' 00:05:40.593 INFO: JSON config files are the same 00:05:40.593 + rm /tmp/62.bpc /tmp/spdk_tgt_config.json.8I5 00:05:40.593 + exit 0 00:05:40.593 17:15:00 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:40.593 17:15:00 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:40.593 INFO: changing configuration and checking if this can be detected... 00:05:40.593 17:15:00 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:40.593 17:15:00 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:40.852 17:15:00 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.852 17:15:00 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:40.852 17:15:00 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.852 + '[' 2 -ne 2 ']' 00:05:40.852 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:40.852 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
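The "configuration is the same" check dumps the live config over RPC, normalizes both sides with config_filter.py -method sort, and diffs the results. The idea in outline (temp-file names here are illustrative; json_diff.sh itself uses mktemp as shown above):

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  SORT="$SPDK/test/json_config/config_filter.py -method sort"
  # Normalize the target's current config and the JSON file it was started from.
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | $SORT > /tmp/live.json
  $SORT < "$SPDK/spdk_tgt_config.json" > /tmp/saved.json
  # An empty diff means the saved file fully reproduces the running target.
  diff -u /tmp/saved.json /tmp/live.json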
00:05:40.852 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:40.852 +++ basename /dev/fd/62 00:05:40.852 ++ mktemp /tmp/62.XXX 00:05:40.852 + tmp_file_1=/tmp/62.UU6 00:05:40.852 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.852 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:40.852 + tmp_file_2=/tmp/spdk_tgt_config.json.8ob 00:05:40.852 + ret=0 00:05:40.852 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.110 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.110 + diff -u /tmp/62.UU6 /tmp/spdk_tgt_config.json.8ob 00:05:41.110 + ret=1 00:05:41.110 + echo '=== Start of file: /tmp/62.UU6 ===' 00:05:41.110 + cat /tmp/62.UU6 00:05:41.110 + echo '=== End of file: /tmp/62.UU6 ===' 00:05:41.110 + echo '' 00:05:41.110 + echo '=== Start of file: /tmp/spdk_tgt_config.json.8ob ===' 00:05:41.110 + cat /tmp/spdk_tgt_config.json.8ob 00:05:41.110 + echo '=== End of file: /tmp/spdk_tgt_config.json.8ob ===' 00:05:41.110 + echo '' 00:05:41.110 + rm /tmp/62.UU6 /tmp/spdk_tgt_config.json.8ob 00:05:41.110 + exit 1 00:05:41.110 17:15:00 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:41.110 INFO: configuration change detected. 00:05:41.110 17:15:00 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:41.110 17:15:00 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:41.110 17:15:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.110 17:15:00 -- common/autotest_common.sh@10 -- # set +x 00:05:41.110 17:15:00 -- json_config/json_config.sh@360 -- # local ret=0 00:05:41.110 17:15:00 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:41.110 17:15:00 -- json_config/json_config.sh@370 -- # [[ -n 2523155 ]] 00:05:41.110 17:15:00 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:41.110 17:15:00 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:41.110 17:15:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.110 17:15:00 -- common/autotest_common.sh@10 -- # set +x 00:05:41.110 17:15:00 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:41.110 17:15:00 -- json_config/json_config.sh@246 -- # uname -s 00:05:41.110 17:15:00 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:41.110 17:15:00 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:41.110 17:15:00 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:41.110 17:15:00 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:41.110 17:15:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.110 17:15:00 -- common/autotest_common.sh@10 -- # set +x 00:05:41.110 17:15:00 -- json_config/json_config.sh@376 -- # killprocess 2523155 00:05:41.110 17:15:00 -- common/autotest_common.sh@936 -- # '[' -z 2523155 ']' 00:05:41.110 17:15:00 -- common/autotest_common.sh@940 -- # kill -0 2523155 00:05:41.368 17:15:00 -- common/autotest_common.sh@941 -- # uname 00:05:41.368 17:15:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:41.368 17:15:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2523155 00:05:41.368 17:15:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:41.368 17:15:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:41.368 17:15:00 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 2523155' 00:05:41.368 killing process with pid 2523155 00:05:41.368 17:15:00 -- common/autotest_common.sh@955 -- # kill 2523155 00:05:41.368 17:15:00 -- common/autotest_common.sh@960 -- # wait 2523155 00:05:43.899 17:15:03 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.899 17:15:03 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:43.899 17:15:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.899 17:15:03 -- common/autotest_common.sh@10 -- # set +x 00:05:43.899 17:15:03 -- json_config/json_config.sh@381 -- # return 0 00:05:43.899 17:15:03 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:43.899 INFO: Success 00:05:43.899 17:15:03 -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:43.899 17:15:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:05:43.899 17:15:03 -- nvmf/common.sh@116 -- # sync 00:05:43.899 17:15:03 -- nvmf/common.sh@118 -- # '[' '' == tcp ']' 00:05:43.899 17:15:03 -- nvmf/common.sh@118 -- # '[' '' == rdma ']' 00:05:43.899 17:15:03 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:05:43.900 17:15:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:05:43.900 17:15:03 -- nvmf/common.sh@483 -- # [[ '' == \t\c\p ]] 00:05:43.900 00:05:43.900 real 0m24.200s 00:05:43.900 user 0m26.841s 00:05:43.900 sys 0m7.747s 00:05:43.900 17:15:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.900 17:15:03 -- common/autotest_common.sh@10 -- # set +x 00:05:43.900 ************************************ 00:05:43.900 END TEST json_config 00:05:43.900 ************************************ 00:05:43.900 17:15:03 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.900 17:15:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.900 17:15:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.900 17:15:03 -- common/autotest_common.sh@10 -- # set +x 00:05:43.900 ************************************ 00:05:43.900 START TEST json_config_extra_key 00:05:43.900 ************************************ 00:05:43.900 17:15:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.900 17:15:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:43.900 17:15:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:43.900 17:15:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:43.900 17:15:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:43.900 17:15:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:43.900 17:15:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:43.900 17:15:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:43.900 17:15:03 -- scripts/common.sh@335 -- # IFS=.-: 00:05:43.900 17:15:03 -- scripts/common.sh@335 -- # read -ra ver1 00:05:43.900 17:15:03 -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.900 17:15:03 -- scripts/common.sh@336 -- # read -ra ver2 00:05:43.900 17:15:03 -- scripts/common.sh@337 -- # local 'op=<' 00:05:43.900 17:15:03 -- scripts/common.sh@339 -- # ver1_l=2 00:05:43.900 17:15:03 -- scripts/common.sh@340 -- # ver2_l=1 00:05:43.900 17:15:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:43.900 17:15:03 -- scripts/common.sh@343 -- # case "$op" in 00:05:43.900 17:15:03 -- 
scripts/common.sh@344 -- # : 1 00:05:43.900 17:15:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:43.900 17:15:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.900 17:15:03 -- scripts/common.sh@364 -- # decimal 1 00:05:43.900 17:15:03 -- scripts/common.sh@352 -- # local d=1 00:05:43.900 17:15:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.900 17:15:03 -- scripts/common.sh@354 -- # echo 1 00:05:43.900 17:15:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:43.900 17:15:03 -- scripts/common.sh@365 -- # decimal 2 00:05:43.900 17:15:03 -- scripts/common.sh@352 -- # local d=2 00:05:43.900 17:15:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.900 17:15:03 -- scripts/common.sh@354 -- # echo 2 00:05:43.900 17:15:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:43.900 17:15:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:43.900 17:15:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:43.900 17:15:03 -- scripts/common.sh@367 -- # return 0 00:05:43.900 17:15:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.900 17:15:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:43.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.900 --rc genhtml_branch_coverage=1 00:05:43.900 --rc genhtml_function_coverage=1 00:05:43.900 --rc genhtml_legend=1 00:05:43.900 --rc geninfo_all_blocks=1 00:05:43.900 --rc geninfo_unexecuted_blocks=1 00:05:43.900 00:05:43.900 ' 00:05:43.900 17:15:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:43.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.900 --rc genhtml_branch_coverage=1 00:05:43.900 --rc genhtml_function_coverage=1 00:05:43.900 --rc genhtml_legend=1 00:05:43.900 --rc geninfo_all_blocks=1 00:05:43.900 --rc geninfo_unexecuted_blocks=1 00:05:43.900 00:05:43.900 ' 00:05:43.900 17:15:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:43.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.900 --rc genhtml_branch_coverage=1 00:05:43.900 --rc genhtml_function_coverage=1 00:05:43.900 --rc genhtml_legend=1 00:05:43.900 --rc geninfo_all_blocks=1 00:05:43.900 --rc geninfo_unexecuted_blocks=1 00:05:43.900 00:05:43.900 ' 00:05:43.900 17:15:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:43.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.900 --rc genhtml_branch_coverage=1 00:05:43.900 --rc genhtml_function_coverage=1 00:05:43.900 --rc genhtml_legend=1 00:05:43.900 --rc geninfo_all_blocks=1 00:05:43.900 --rc geninfo_unexecuted_blocks=1 00:05:43.900 00:05:43.900 ' 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.900 17:15:03 -- nvmf/common.sh@7 -- # uname -s 00:05:43.900 17:15:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.900 17:15:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.900 17:15:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.900 17:15:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.900 17:15:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.900 17:15:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.900 17:15:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.900 17:15:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.900 17:15:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
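The scripts/common.sh block above is the harness's field-by-field version comparator, used here to check whether the installed lcov predates 2.x. Not what autotest_common.sh actually does, but the same test can be written compactly with sort -V:

  # True when $1 sorts strictly before $2 as a version string.
  version_lt() {
      [ "$1" != "$2" ] &&
          [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  version_lt 1.15 2 && echo 'lcov is older than 2.x'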
00:05:43.900 17:15:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.900 17:15:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:43.900 17:15:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:43.900 17:15:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.900 17:15:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.900 17:15:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.900 17:15:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:43.900 17:15:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.900 17:15:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.900 17:15:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.900 17:15:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.900 17:15:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.900 17:15:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.900 17:15:03 -- paths/export.sh@5 -- # export PATH 00:05:43.900 17:15:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.900 17:15:03 -- nvmf/common.sh@46 -- # : 0 00:05:43.900 17:15:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:43.900 17:15:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:43.900 17:15:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:43.900 17:15:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.900 17:15:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.900 17:15:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:43.900 17:15:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:43.900 17:15:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@16 
-- # declare -A app_pid 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:43.900 INFO: launching applications... 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=2524771 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:43.900 Waiting for target to run... 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 2524771 /var/tmp/spdk_tgt.sock 00:05:43.900 17:15:03 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.900 17:15:03 -- common/autotest_common.sh@829 -- # '[' -z 2524771 ']' 00:05:43.900 17:15:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.901 17:15:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.901 17:15:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.901 17:15:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.901 17:15:03 -- common/autotest_common.sh@10 -- # set +x 00:05:44.160 [2024-11-09 17:15:03.681021] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
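json_config_extra_key repeats the launch-and-wait pattern, this time starting the target directly from test/json_config/extra_key.json, and the nvmf common.sh it sources derives the host identity from nvme-cli. A sketch of that identity step (the exact parsing in common.sh may differ):

  # Host NQN straight from nvme-cli; the host ID is the trailing uuid portion.
  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # bare uuid, passed as --hostid on connect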
00:05:44.160 [2024-11-09 17:15:03.681071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2524771 ] 00:05:44.160 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.418 [2024-11-09 17:15:03.961453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.418 [2024-11-09 17:15:04.023293] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.418 [2024-11-09 17:15:04.023399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.984 17:15:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.984 17:15:04 -- common/autotest_common.sh@862 -- # return 0 00:05:44.984 17:15:04 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:44.984 00:05:44.984 17:15:04 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:44.984 INFO: shutting down applications... 00:05:44.984 17:15:04 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:44.984 17:15:04 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:44.984 17:15:04 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:44.984 17:15:04 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 2524771 ]] 00:05:44.984 17:15:04 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 2524771 00:05:44.984 17:15:04 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:44.984 17:15:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:44.984 17:15:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2524771 00:05:44.984 17:15:04 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:45.243 17:15:04 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:45.243 17:15:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:45.243 17:15:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2524771 00:05:45.243 17:15:04 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:45.243 17:15:04 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:45.243 17:15:04 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:45.243 17:15:04 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:45.243 SPDK target shutdown done 00:05:45.243 17:15:04 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:45.243 Success 00:05:45.243 00:05:45.243 real 0m1.544s 00:05:45.243 user 0m1.293s 00:05:45.243 sys 0m0.436s 00:05:45.243 17:15:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.243 17:15:04 -- common/autotest_common.sh@10 -- # set +x 00:05:45.243 ************************************ 00:05:45.243 END TEST json_config_extra_key 00:05:45.243 ************************************ 00:05:45.501 17:15:05 -- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.501 17:15:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.501 17:15:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.501 17:15:05 -- common/autotest_common.sh@10 -- # set +x 00:05:45.501 ************************************ 00:05:45.501 START TEST alias_rpc 00:05:45.501 ************************************ 00:05:45.501 17:15:05 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.501 * Looking for test storage... 00:05:45.501 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:45.501 17:15:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:45.501 17:15:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:45.501 17:15:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:45.501 17:15:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:45.501 17:15:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:45.501 17:15:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:45.501 17:15:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:45.501 17:15:05 -- scripts/common.sh@335 -- # IFS=.-: 00:05:45.501 17:15:05 -- scripts/common.sh@335 -- # read -ra ver1 00:05:45.501 17:15:05 -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.501 17:15:05 -- scripts/common.sh@336 -- # read -ra ver2 00:05:45.501 17:15:05 -- scripts/common.sh@337 -- # local 'op=<' 00:05:45.501 17:15:05 -- scripts/common.sh@339 -- # ver1_l=2 00:05:45.501 17:15:05 -- scripts/common.sh@340 -- # ver2_l=1 00:05:45.501 17:15:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:45.501 17:15:05 -- scripts/common.sh@343 -- # case "$op" in 00:05:45.501 17:15:05 -- scripts/common.sh@344 -- # : 1 00:05:45.501 17:15:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:45.501 17:15:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.501 17:15:05 -- scripts/common.sh@364 -- # decimal 1 00:05:45.501 17:15:05 -- scripts/common.sh@352 -- # local d=1 00:05:45.501 17:15:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.501 17:15:05 -- scripts/common.sh@354 -- # echo 1 00:05:45.501 17:15:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:45.501 17:15:05 -- scripts/common.sh@365 -- # decimal 2 00:05:45.501 17:15:05 -- scripts/common.sh@352 -- # local d=2 00:05:45.501 17:15:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.501 17:15:05 -- scripts/common.sh@354 -- # echo 2 00:05:45.501 17:15:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:45.501 17:15:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:45.501 17:15:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:45.501 17:15:05 -- scripts/common.sh@367 -- # return 0 00:05:45.501 17:15:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.501 17:15:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:45.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.501 --rc genhtml_branch_coverage=1 00:05:45.501 --rc genhtml_function_coverage=1 00:05:45.501 --rc genhtml_legend=1 00:05:45.501 --rc geninfo_all_blocks=1 00:05:45.501 --rc geninfo_unexecuted_blocks=1 00:05:45.501 00:05:45.501 ' 00:05:45.501 17:15:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:45.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.501 --rc genhtml_branch_coverage=1 00:05:45.501 --rc genhtml_function_coverage=1 00:05:45.501 --rc genhtml_legend=1 00:05:45.501 --rc geninfo_all_blocks=1 00:05:45.501 --rc geninfo_unexecuted_blocks=1 00:05:45.501 00:05:45.501 ' 00:05:45.501 17:15:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:45.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.501 --rc genhtml_branch_coverage=1 00:05:45.501 --rc 
genhtml_function_coverage=1 00:05:45.501 --rc genhtml_legend=1 00:05:45.501 --rc geninfo_all_blocks=1 00:05:45.501 --rc geninfo_unexecuted_blocks=1 00:05:45.501 00:05:45.501 ' 00:05:45.501 17:15:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:45.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.501 --rc genhtml_branch_coverage=1 00:05:45.501 --rc genhtml_function_coverage=1 00:05:45.501 --rc genhtml_legend=1 00:05:45.501 --rc geninfo_all_blocks=1 00:05:45.501 --rc geninfo_unexecuted_blocks=1 00:05:45.501 00:05:45.501 ' 00:05:45.502 17:15:05 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:45.502 17:15:05 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2525451 00:05:45.502 17:15:05 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2525451 00:05:45.502 17:15:05 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.502 17:15:05 -- common/autotest_common.sh@829 -- # '[' -z 2525451 ']' 00:05:45.502 17:15:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.502 17:15:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.502 17:15:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.502 17:15:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.502 17:15:05 -- common/autotest_common.sh@10 -- # set +x 00:05:45.760 [2024-11-09 17:15:05.282233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:45.760 [2024-11-09 17:15:05.282283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525451 ] 00:05:45.760 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.760 [2024-11-09 17:15:05.348826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.760 [2024-11-09 17:15:05.422399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.760 [2024-11-09 17:15:05.422519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.328 17:15:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.328 17:15:06 -- common/autotest_common.sh@862 -- # return 0 00:05:46.328 17:15:06 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:46.586 17:15:06 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2525451 00:05:46.586 17:15:06 -- common/autotest_common.sh@936 -- # '[' -z 2525451 ']' 00:05:46.586 17:15:06 -- common/autotest_common.sh@940 -- # kill -0 2525451 00:05:46.586 17:15:06 -- common/autotest_common.sh@941 -- # uname 00:05:46.586 17:15:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.586 17:15:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2525451 00:05:46.586 17:15:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.586 17:15:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.849 17:15:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2525451' 00:05:46.849 killing process with pid 2525451 00:05:46.849 17:15:06 -- common/autotest_common.sh@955 -- # kill 2525451 00:05:46.849 17:15:06 -- 
common/autotest_common.sh@960 -- # wait 2525451 00:05:47.108 00:05:47.108 real 0m1.640s 00:05:47.108 user 0m1.718s 00:05:47.108 sys 0m0.496s 00:05:47.108 17:15:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.108 17:15:06 -- common/autotest_common.sh@10 -- # set +x 00:05:47.108 ************************************ 00:05:47.108 END TEST alias_rpc 00:05:47.108 ************************************ 00:05:47.108 17:15:06 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:47.108 17:15:06 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:47.108 17:15:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.108 17:15:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.108 17:15:06 -- common/autotest_common.sh@10 -- # set +x 00:05:47.108 ************************************ 00:05:47.108 START TEST spdkcli_tcp 00:05:47.108 ************************************ 00:05:47.108 17:15:06 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:47.108 * Looking for test storage... 00:05:47.108 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:47.108 17:15:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:47.108 17:15:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:47.108 17:15:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:47.367 17:15:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:47.367 17:15:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:47.367 17:15:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:47.367 17:15:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:47.367 17:15:06 -- scripts/common.sh@335 -- # IFS=.-: 00:05:47.367 17:15:06 -- scripts/common.sh@335 -- # read -ra ver1 00:05:47.367 17:15:06 -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.367 17:15:06 -- scripts/common.sh@336 -- # read -ra ver2 00:05:47.367 17:15:06 -- scripts/common.sh@337 -- # local 'op=<' 00:05:47.367 17:15:06 -- scripts/common.sh@339 -- # ver1_l=2 00:05:47.367 17:15:06 -- scripts/common.sh@340 -- # ver2_l=1 00:05:47.367 17:15:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:47.367 17:15:06 -- scripts/common.sh@343 -- # case "$op" in 00:05:47.367 17:15:06 -- scripts/common.sh@344 -- # : 1 00:05:47.367 17:15:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:47.367 17:15:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.367 17:15:06 -- scripts/common.sh@364 -- # decimal 1 00:05:47.367 17:15:06 -- scripts/common.sh@352 -- # local d=1 00:05:47.367 17:15:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.367 17:15:06 -- scripts/common.sh@354 -- # echo 1 00:05:47.367 17:15:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:47.367 17:15:06 -- scripts/common.sh@365 -- # decimal 2 00:05:47.367 17:15:06 -- scripts/common.sh@352 -- # local d=2 00:05:47.367 17:15:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.367 17:15:06 -- scripts/common.sh@354 -- # echo 2 00:05:47.367 17:15:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:47.367 17:15:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:47.367 17:15:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:47.367 17:15:06 -- scripts/common.sh@367 -- # return 0 00:05:47.367 17:15:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.367 17:15:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:47.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.367 --rc genhtml_branch_coverage=1 00:05:47.367 --rc genhtml_function_coverage=1 00:05:47.367 --rc genhtml_legend=1 00:05:47.367 --rc geninfo_all_blocks=1 00:05:47.367 --rc geninfo_unexecuted_blocks=1 00:05:47.367 00:05:47.367 ' 00:05:47.367 17:15:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:47.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.367 --rc genhtml_branch_coverage=1 00:05:47.367 --rc genhtml_function_coverage=1 00:05:47.367 --rc genhtml_legend=1 00:05:47.367 --rc geninfo_all_blocks=1 00:05:47.367 --rc geninfo_unexecuted_blocks=1 00:05:47.367 00:05:47.367 ' 00:05:47.367 17:15:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:47.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.367 --rc genhtml_branch_coverage=1 00:05:47.367 --rc genhtml_function_coverage=1 00:05:47.367 --rc genhtml_legend=1 00:05:47.367 --rc geninfo_all_blocks=1 00:05:47.367 --rc geninfo_unexecuted_blocks=1 00:05:47.367 00:05:47.367 ' 00:05:47.367 17:15:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:47.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.367 --rc genhtml_branch_coverage=1 00:05:47.367 --rc genhtml_function_coverage=1 00:05:47.367 --rc genhtml_legend=1 00:05:47.367 --rc geninfo_all_blocks=1 00:05:47.367 --rc geninfo_unexecuted_blocks=1 00:05:47.367 00:05:47.367 ' 00:05:47.367 17:15:06 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:47.367 17:15:06 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:47.367 17:15:06 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:47.367 17:15:06 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:47.367 17:15:06 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:47.367 17:15:06 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:47.367 17:15:06 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:47.367 17:15:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.367 17:15:06 -- common/autotest_common.sh@10 -- # set +x 00:05:47.367 17:15:06 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2525846 00:05:47.367 17:15:06 -- spdkcli/tcp.sh@27 -- # waitforlisten 2525846 00:05:47.367 17:15:06 -- 
spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:47.367 17:15:06 -- common/autotest_common.sh@829 -- # '[' -z 2525846 ']' 00:05:47.367 17:15:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.367 17:15:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.367 17:15:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.367 17:15:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.367 17:15:06 -- common/autotest_common.sh@10 -- # set +x 00:05:47.367 [2024-11-09 17:15:06.977117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.367 [2024-11-09 17:15:06.977174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525846 ] 00:05:47.367 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.367 [2024-11-09 17:15:07.048048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.367 [2024-11-09 17:15:07.120450] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.367 [2024-11-09 17:15:07.120601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.367 [2024-11-09 17:15:07.120607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.303 17:15:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.303 17:15:07 -- common/autotest_common.sh@862 -- # return 0 00:05:48.303 17:15:07 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:48.303 17:15:07 -- spdkcli/tcp.sh@31 -- # socat_pid=2526115 00:05:48.303 17:15:07 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:48.303 [ 00:05:48.303 "bdev_malloc_delete", 00:05:48.303 "bdev_malloc_create", 00:05:48.303 "bdev_null_resize", 00:05:48.303 "bdev_null_delete", 00:05:48.303 "bdev_null_create", 00:05:48.303 "bdev_nvme_cuse_unregister", 00:05:48.303 "bdev_nvme_cuse_register", 00:05:48.303 "bdev_opal_new_user", 00:05:48.303 "bdev_opal_set_lock_state", 00:05:48.303 "bdev_opal_delete", 00:05:48.303 "bdev_opal_get_info", 00:05:48.303 "bdev_opal_create", 00:05:48.303 "bdev_nvme_opal_revert", 00:05:48.303 "bdev_nvme_opal_init", 00:05:48.303 "bdev_nvme_send_cmd", 00:05:48.303 "bdev_nvme_get_path_iostat", 00:05:48.303 "bdev_nvme_get_mdns_discovery_info", 00:05:48.303 "bdev_nvme_stop_mdns_discovery", 00:05:48.303 "bdev_nvme_start_mdns_discovery", 00:05:48.303 "bdev_nvme_set_multipath_policy", 00:05:48.303 "bdev_nvme_set_preferred_path", 00:05:48.303 "bdev_nvme_get_io_paths", 00:05:48.303 "bdev_nvme_remove_error_injection", 00:05:48.303 "bdev_nvme_add_error_injection", 00:05:48.303 "bdev_nvme_get_discovery_info", 00:05:48.303 "bdev_nvme_stop_discovery", 00:05:48.304 "bdev_nvme_start_discovery", 00:05:48.304 "bdev_nvme_get_controller_health_info", 00:05:48.304 "bdev_nvme_disable_controller", 00:05:48.304 "bdev_nvme_enable_controller", 00:05:48.304 "bdev_nvme_reset_controller", 00:05:48.304 "bdev_nvme_get_transport_statistics", 00:05:48.304 "bdev_nvme_apply_firmware", 00:05:48.304 "bdev_nvme_detach_controller", 
00:05:48.304 "bdev_nvme_get_controllers", 00:05:48.304 "bdev_nvme_attach_controller", 00:05:48.304 "bdev_nvme_set_hotplug", 00:05:48.304 "bdev_nvme_set_options", 00:05:48.304 "bdev_passthru_delete", 00:05:48.304 "bdev_passthru_create", 00:05:48.304 "bdev_lvol_grow_lvstore", 00:05:48.304 "bdev_lvol_get_lvols", 00:05:48.304 "bdev_lvol_get_lvstores", 00:05:48.304 "bdev_lvol_delete", 00:05:48.304 "bdev_lvol_set_read_only", 00:05:48.304 "bdev_lvol_resize", 00:05:48.304 "bdev_lvol_decouple_parent", 00:05:48.304 "bdev_lvol_inflate", 00:05:48.304 "bdev_lvol_rename", 00:05:48.304 "bdev_lvol_clone_bdev", 00:05:48.304 "bdev_lvol_clone", 00:05:48.304 "bdev_lvol_snapshot", 00:05:48.304 "bdev_lvol_create", 00:05:48.304 "bdev_lvol_delete_lvstore", 00:05:48.304 "bdev_lvol_rename_lvstore", 00:05:48.304 "bdev_lvol_create_lvstore", 00:05:48.304 "bdev_raid_set_options", 00:05:48.304 "bdev_raid_remove_base_bdev", 00:05:48.304 "bdev_raid_add_base_bdev", 00:05:48.304 "bdev_raid_delete", 00:05:48.304 "bdev_raid_create", 00:05:48.304 "bdev_raid_get_bdevs", 00:05:48.304 "bdev_error_inject_error", 00:05:48.304 "bdev_error_delete", 00:05:48.304 "bdev_error_create", 00:05:48.304 "bdev_split_delete", 00:05:48.304 "bdev_split_create", 00:05:48.304 "bdev_delay_delete", 00:05:48.304 "bdev_delay_create", 00:05:48.304 "bdev_delay_update_latency", 00:05:48.304 "bdev_zone_block_delete", 00:05:48.304 "bdev_zone_block_create", 00:05:48.304 "blobfs_create", 00:05:48.304 "blobfs_detect", 00:05:48.304 "blobfs_set_cache_size", 00:05:48.304 "bdev_aio_delete", 00:05:48.304 "bdev_aio_rescan", 00:05:48.304 "bdev_aio_create", 00:05:48.304 "bdev_ftl_set_property", 00:05:48.304 "bdev_ftl_get_properties", 00:05:48.304 "bdev_ftl_get_stats", 00:05:48.304 "bdev_ftl_unmap", 00:05:48.304 "bdev_ftl_unload", 00:05:48.304 "bdev_ftl_delete", 00:05:48.304 "bdev_ftl_load", 00:05:48.304 "bdev_ftl_create", 00:05:48.304 "bdev_virtio_attach_controller", 00:05:48.304 "bdev_virtio_scsi_get_devices", 00:05:48.304 "bdev_virtio_detach_controller", 00:05:48.304 "bdev_virtio_blk_set_hotplug", 00:05:48.304 "bdev_iscsi_delete", 00:05:48.304 "bdev_iscsi_create", 00:05:48.304 "bdev_iscsi_set_options", 00:05:48.304 "accel_error_inject_error", 00:05:48.304 "ioat_scan_accel_module", 00:05:48.304 "dsa_scan_accel_module", 00:05:48.304 "iaa_scan_accel_module", 00:05:48.304 "iscsi_set_options", 00:05:48.304 "iscsi_get_auth_groups", 00:05:48.304 "iscsi_auth_group_remove_secret", 00:05:48.304 "iscsi_auth_group_add_secret", 00:05:48.304 "iscsi_delete_auth_group", 00:05:48.304 "iscsi_create_auth_group", 00:05:48.304 "iscsi_set_discovery_auth", 00:05:48.304 "iscsi_get_options", 00:05:48.304 "iscsi_target_node_request_logout", 00:05:48.304 "iscsi_target_node_set_redirect", 00:05:48.304 "iscsi_target_node_set_auth", 00:05:48.304 "iscsi_target_node_add_lun", 00:05:48.304 "iscsi_get_connections", 00:05:48.304 "iscsi_portal_group_set_auth", 00:05:48.304 "iscsi_start_portal_group", 00:05:48.304 "iscsi_delete_portal_group", 00:05:48.304 "iscsi_create_portal_group", 00:05:48.304 "iscsi_get_portal_groups", 00:05:48.304 "iscsi_delete_target_node", 00:05:48.304 "iscsi_target_node_remove_pg_ig_maps", 00:05:48.304 "iscsi_target_node_add_pg_ig_maps", 00:05:48.304 "iscsi_create_target_node", 00:05:48.304 "iscsi_get_target_nodes", 00:05:48.304 "iscsi_delete_initiator_group", 00:05:48.304 "iscsi_initiator_group_remove_initiators", 00:05:48.304 "iscsi_initiator_group_add_initiators", 00:05:48.304 "iscsi_create_initiator_group", 00:05:48.304 "iscsi_get_initiator_groups", 00:05:48.304 
"nvmf_set_crdt", 00:05:48.304 "nvmf_set_config", 00:05:48.304 "nvmf_set_max_subsystems", 00:05:48.304 "nvmf_subsystem_get_listeners", 00:05:48.304 "nvmf_subsystem_get_qpairs", 00:05:48.304 "nvmf_subsystem_get_controllers", 00:05:48.304 "nvmf_get_stats", 00:05:48.304 "nvmf_get_transports", 00:05:48.304 "nvmf_create_transport", 00:05:48.304 "nvmf_get_targets", 00:05:48.304 "nvmf_delete_target", 00:05:48.304 "nvmf_create_target", 00:05:48.304 "nvmf_subsystem_allow_any_host", 00:05:48.304 "nvmf_subsystem_remove_host", 00:05:48.304 "nvmf_subsystem_add_host", 00:05:48.304 "nvmf_subsystem_remove_ns", 00:05:48.304 "nvmf_subsystem_add_ns", 00:05:48.304 "nvmf_subsystem_listener_set_ana_state", 00:05:48.304 "nvmf_discovery_get_referrals", 00:05:48.304 "nvmf_discovery_remove_referral", 00:05:48.304 "nvmf_discovery_add_referral", 00:05:48.304 "nvmf_subsystem_remove_listener", 00:05:48.304 "nvmf_subsystem_add_listener", 00:05:48.304 "nvmf_delete_subsystem", 00:05:48.304 "nvmf_create_subsystem", 00:05:48.304 "nvmf_get_subsystems", 00:05:48.304 "env_dpdk_get_mem_stats", 00:05:48.304 "nbd_get_disks", 00:05:48.304 "nbd_stop_disk", 00:05:48.304 "nbd_start_disk", 00:05:48.304 "ublk_recover_disk", 00:05:48.304 "ublk_get_disks", 00:05:48.304 "ublk_stop_disk", 00:05:48.304 "ublk_start_disk", 00:05:48.304 "ublk_destroy_target", 00:05:48.304 "ublk_create_target", 00:05:48.304 "virtio_blk_create_transport", 00:05:48.304 "virtio_blk_get_transports", 00:05:48.304 "vhost_controller_set_coalescing", 00:05:48.304 "vhost_get_controllers", 00:05:48.304 "vhost_delete_controller", 00:05:48.304 "vhost_create_blk_controller", 00:05:48.304 "vhost_scsi_controller_remove_target", 00:05:48.304 "vhost_scsi_controller_add_target", 00:05:48.304 "vhost_start_scsi_controller", 00:05:48.304 "vhost_create_scsi_controller", 00:05:48.304 "thread_set_cpumask", 00:05:48.304 "framework_get_scheduler", 00:05:48.304 "framework_set_scheduler", 00:05:48.304 "framework_get_reactors", 00:05:48.304 "thread_get_io_channels", 00:05:48.304 "thread_get_pollers", 00:05:48.304 "thread_get_stats", 00:05:48.304 "framework_monitor_context_switch", 00:05:48.304 "spdk_kill_instance", 00:05:48.304 "log_enable_timestamps", 00:05:48.304 "log_get_flags", 00:05:48.304 "log_clear_flag", 00:05:48.304 "log_set_flag", 00:05:48.304 "log_get_level", 00:05:48.304 "log_set_level", 00:05:48.304 "log_get_print_level", 00:05:48.304 "log_set_print_level", 00:05:48.304 "framework_enable_cpumask_locks", 00:05:48.304 "framework_disable_cpumask_locks", 00:05:48.304 "framework_wait_init", 00:05:48.304 "framework_start_init", 00:05:48.304 "scsi_get_devices", 00:05:48.304 "bdev_get_histogram", 00:05:48.304 "bdev_enable_histogram", 00:05:48.304 "bdev_set_qos_limit", 00:05:48.304 "bdev_set_qd_sampling_period", 00:05:48.304 "bdev_get_bdevs", 00:05:48.304 "bdev_reset_iostat", 00:05:48.304 "bdev_get_iostat", 00:05:48.304 "bdev_examine", 00:05:48.304 "bdev_wait_for_examine", 00:05:48.304 "bdev_set_options", 00:05:48.304 "notify_get_notifications", 00:05:48.304 "notify_get_types", 00:05:48.304 "accel_get_stats", 00:05:48.304 "accel_set_options", 00:05:48.304 "accel_set_driver", 00:05:48.304 "accel_crypto_key_destroy", 00:05:48.304 "accel_crypto_keys_get", 00:05:48.304 "accel_crypto_key_create", 00:05:48.304 "accel_assign_opc", 00:05:48.304 "accel_get_module_info", 00:05:48.304 "accel_get_opc_assignments", 00:05:48.304 "vmd_rescan", 00:05:48.304 "vmd_remove_device", 00:05:48.304 "vmd_enable", 00:05:48.304 "sock_set_default_impl", 00:05:48.304 "sock_impl_set_options", 00:05:48.304 
"sock_impl_get_options", 00:05:48.304 "iobuf_get_stats", 00:05:48.304 "iobuf_set_options", 00:05:48.304 "framework_get_pci_devices", 00:05:48.304 "framework_get_config", 00:05:48.304 "framework_get_subsystems", 00:05:48.304 "trace_get_info", 00:05:48.304 "trace_get_tpoint_group_mask", 00:05:48.304 "trace_disable_tpoint_group", 00:05:48.304 "trace_enable_tpoint_group", 00:05:48.304 "trace_clear_tpoint_mask", 00:05:48.304 "trace_set_tpoint_mask", 00:05:48.304 "spdk_get_version", 00:05:48.304 "rpc_get_methods" 00:05:48.304 ] 00:05:48.304 17:15:07 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:48.304 17:15:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:48.304 17:15:07 -- common/autotest_common.sh@10 -- # set +x 00:05:48.304 17:15:07 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:48.304 17:15:07 -- spdkcli/tcp.sh@38 -- # killprocess 2525846 00:05:48.304 17:15:07 -- common/autotest_common.sh@936 -- # '[' -z 2525846 ']' 00:05:48.304 17:15:07 -- common/autotest_common.sh@940 -- # kill -0 2525846 00:05:48.304 17:15:07 -- common/autotest_common.sh@941 -- # uname 00:05:48.304 17:15:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.304 17:15:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2525846 00:05:48.304 17:15:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.304 17:15:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.304 17:15:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2525846' 00:05:48.304 killing process with pid 2525846 00:05:48.304 17:15:08 -- common/autotest_common.sh@955 -- # kill 2525846 00:05:48.304 17:15:08 -- common/autotest_common.sh@960 -- # wait 2525846 00:05:48.872 00:05:48.872 real 0m1.649s 00:05:48.872 user 0m2.908s 00:05:48.872 sys 0m0.528s 00:05:48.872 17:15:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.872 17:15:08 -- common/autotest_common.sh@10 -- # set +x 00:05:48.872 ************************************ 00:05:48.872 END TEST spdkcli_tcp 00:05:48.872 ************************************ 00:05:48.872 17:15:08 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:48.872 17:15:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.872 17:15:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.872 17:15:08 -- common/autotest_common.sh@10 -- # set +x 00:05:48.872 ************************************ 00:05:48.872 START TEST dpdk_mem_utility 00:05:48.872 ************************************ 00:05:48.872 17:15:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:48.872 * Looking for test storage... 
00:05:48.872 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:48.872 17:15:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:48.872 17:15:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:48.872 17:15:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:48.872 17:15:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:48.872 17:15:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:48.872 17:15:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:48.872 17:15:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:48.872 17:15:08 -- scripts/common.sh@335 -- # IFS=.-: 00:05:48.872 17:15:08 -- scripts/common.sh@335 -- # read -ra ver1 00:05:48.872 17:15:08 -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.872 17:15:08 -- scripts/common.sh@336 -- # read -ra ver2 00:05:48.872 17:15:08 -- scripts/common.sh@337 -- # local 'op=<' 00:05:48.872 17:15:08 -- scripts/common.sh@339 -- # ver1_l=2 00:05:48.872 17:15:08 -- scripts/common.sh@340 -- # ver2_l=1 00:05:48.872 17:15:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:48.872 17:15:08 -- scripts/common.sh@343 -- # case "$op" in 00:05:48.872 17:15:08 -- scripts/common.sh@344 -- # : 1 00:05:48.872 17:15:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:48.872 17:15:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.872 17:15:08 -- scripts/common.sh@364 -- # decimal 1 00:05:48.872 17:15:08 -- scripts/common.sh@352 -- # local d=1 00:05:48.872 17:15:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.872 17:15:08 -- scripts/common.sh@354 -- # echo 1 00:05:48.872 17:15:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:48.872 17:15:08 -- scripts/common.sh@365 -- # decimal 2 00:05:48.872 17:15:08 -- scripts/common.sh@352 -- # local d=2 00:05:48.872 17:15:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.872 17:15:08 -- scripts/common.sh@354 -- # echo 2 00:05:48.872 17:15:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:48.872 17:15:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:48.872 17:15:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:48.872 17:15:08 -- scripts/common.sh@367 -- # return 0 00:05:48.872 17:15:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.872 17:15:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:48.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.872 --rc genhtml_branch_coverage=1 00:05:48.872 --rc genhtml_function_coverage=1 00:05:48.872 --rc genhtml_legend=1 00:05:48.872 --rc geninfo_all_blocks=1 00:05:48.872 --rc geninfo_unexecuted_blocks=1 00:05:48.872 00:05:48.872 ' 00:05:48.872 17:15:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:48.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.872 --rc genhtml_branch_coverage=1 00:05:48.872 --rc genhtml_function_coverage=1 00:05:48.872 --rc genhtml_legend=1 00:05:48.872 --rc geninfo_all_blocks=1 00:05:48.872 --rc geninfo_unexecuted_blocks=1 00:05:48.872 00:05:48.872 ' 00:05:48.872 17:15:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:48.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.872 --rc genhtml_branch_coverage=1 00:05:48.872 --rc genhtml_function_coverage=1 00:05:48.872 --rc genhtml_legend=1 00:05:48.872 --rc geninfo_all_blocks=1 00:05:48.872 --rc geninfo_unexecuted_blocks=1 00:05:48.872 
00:05:48.872 ' 00:05:48.872 17:15:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:48.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.872 --rc genhtml_branch_coverage=1 00:05:48.872 --rc genhtml_function_coverage=1 00:05:48.872 --rc genhtml_legend=1 00:05:48.872 --rc geninfo_all_blocks=1 00:05:48.872 --rc geninfo_unexecuted_blocks=1 00:05:48.872 00:05:48.872 ' 00:05:48.872 17:15:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:48.872 17:15:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2526227 00:05:48.872 17:15:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2526227 00:05:48.872 17:15:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.872 17:15:08 -- common/autotest_common.sh@829 -- # '[' -z 2526227 ']' 00:05:48.872 17:15:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.872 17:15:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.872 17:15:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.873 17:15:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.873 17:15:08 -- common/autotest_common.sh@10 -- # set +x 00:05:49.132 [2024-11-09 17:15:08.669719] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.132 [2024-11-09 17:15:08.669770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526227 ] 00:05:49.132 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.132 [2024-11-09 17:15:08.736581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.132 [2024-11-09 17:15:08.809962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.132 [2024-11-09 17:15:08.810074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.069 17:15:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.069 17:15:09 -- common/autotest_common.sh@862 -- # return 0 00:05:50.069 17:15:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:50.069 17:15:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:50.069 17:15:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.069 17:15:09 -- common/autotest_common.sh@10 -- # set +x 00:05:50.069 { 00:05:50.069 "filename": "/tmp/spdk_mem_dump.txt" 00:05:50.069 } 00:05:50.069 17:15:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.069 17:15:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:50.069 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:50.069 1 heaps totaling size 814.000000 MiB 00:05:50.069 size: 814.000000 MiB heap id: 0 00:05:50.069 end heaps---------- 00:05:50.069 8 mempools totaling size 598.116089 MiB 00:05:50.069 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:50.069 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:50.069 size: 84.521057 MiB name: 
bdev_io_2526227 00:05:50.069 size: 51.011292 MiB name: evtpool_2526227 00:05:50.069 size: 50.003479 MiB name: msgpool_2526227 00:05:50.069 size: 21.763794 MiB name: PDU_Pool 00:05:50.069 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:50.069 size: 0.026123 MiB name: Session_Pool 00:05:50.069 end mempools------- 00:05:50.069 6 memzones totaling size 4.142822 MiB 00:05:50.069 size: 1.000366 MiB name: RG_ring_0_2526227 00:05:50.069 size: 1.000366 MiB name: RG_ring_1_2526227 00:05:50.069 size: 1.000366 MiB name: RG_ring_4_2526227 00:05:50.069 size: 1.000366 MiB name: RG_ring_5_2526227 00:05:50.069 size: 0.125366 MiB name: RG_ring_2_2526227 00:05:50.069 size: 0.015991 MiB name: RG_ring_3_2526227 00:05:50.069 end memzones------- 00:05:50.069 17:15:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:50.069 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:50.069 list of free elements. size: 12.519348 MiB 00:05:50.069 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:50.069 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:50.069 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:50.069 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:50.069 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:50.069 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:50.070 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:50.070 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:50.070 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:50.070 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:50.070 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:50.070 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:50.070 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:50.070 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:50.070 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:50.070 list of standard malloc elements. 
size: 199.218079 MiB 00:05:50.070 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:50.070 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:50.070 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:50.070 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:50.070 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:50.070 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:50.070 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:50.070 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:50.070 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:50.070 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:50.070 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:50.070 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:50.070 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:50.070 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:50.070 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:50.070 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:50.070 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:50.070 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:50.070 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:50.070 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:50.070 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:50.070 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:50.070 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:50.070 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:50.070 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:50.070 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:50.070 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:50.070 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:50.070 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:50.070 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:50.070 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:50.070 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:50.070 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:50.070 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:50.070 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:50.070 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:50.070 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:50.070 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:50.070 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:50.070 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:50.070 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:50.070 list of memzone associated elements. 
size: 602.262573 MiB 00:05:50.070 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:50.070 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:50.070 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:50.070 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:50.070 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:50.070 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2526227_0 00:05:50.070 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:50.070 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2526227_0 00:05:50.070 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:50.070 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2526227_0 00:05:50.070 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:50.070 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:50.070 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:50.070 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:50.070 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:50.070 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2526227 00:05:50.070 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:50.070 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2526227 00:05:50.070 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:50.070 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2526227 00:05:50.070 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:50.070 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:50.070 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:50.070 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:50.070 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:50.070 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:50.070 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:50.070 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:50.070 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:50.070 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2526227 00:05:50.070 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:50.070 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2526227 00:05:50.070 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:50.070 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2526227 00:05:50.070 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:50.070 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2526227 00:05:50.070 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:50.070 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2526227 00:05:50.070 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:50.070 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:50.070 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:50.070 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:50.070 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:50.070 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:50.070 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:50.070 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2526227 00:05:50.070 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:50.070 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:50.070 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:50.070 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:50.070 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:50.070 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2526227 00:05:50.070 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:50.070 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:50.070 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:50.070 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2526227 00:05:50.070 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:50.070 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2526227 00:05:50.070 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:50.070 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:50.070 17:15:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:50.070 17:15:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2526227 00:05:50.070 17:15:09 -- common/autotest_common.sh@936 -- # '[' -z 2526227 ']' 00:05:50.070 17:15:09 -- common/autotest_common.sh@940 -- # kill -0 2526227 00:05:50.070 17:15:09 -- common/autotest_common.sh@941 -- # uname 00:05:50.070 17:15:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.070 17:15:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2526227 00:05:50.070 17:15:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:50.070 17:15:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:50.070 17:15:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2526227' 00:05:50.070 killing process with pid 2526227 00:05:50.070 17:15:09 -- common/autotest_common.sh@955 -- # kill 2526227 00:05:50.070 17:15:09 -- common/autotest_common.sh@960 -- # wait 2526227 00:05:50.329 00:05:50.329 real 0m1.530s 00:05:50.329 user 0m1.573s 00:05:50.329 sys 0m0.447s 00:05:50.329 17:15:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.329 17:15:09 -- common/autotest_common.sh@10 -- # set +x 00:05:50.329 ************************************ 00:05:50.329 END TEST dpdk_mem_utility 00:05:50.329 ************************************ 00:05:50.329 17:15:10 -- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:50.329 17:15:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.329 17:15:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.329 17:15:10 -- common/autotest_common.sh@10 -- # set +x 00:05:50.329 ************************************ 00:05:50.329 START TEST event 00:05:50.329 ************************************ 00:05:50.329 17:15:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:50.588 * Looking for test storage... 
00:05:50.588 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:50.588 17:15:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:50.588 17:15:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:50.588 17:15:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:50.588 17:15:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:50.588 17:15:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:50.588 17:15:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:50.588 17:15:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:50.588 17:15:10 -- scripts/common.sh@335 -- # IFS=.-: 00:05:50.588 17:15:10 -- scripts/common.sh@335 -- # read -ra ver1 00:05:50.588 17:15:10 -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.588 17:15:10 -- scripts/common.sh@336 -- # read -ra ver2 00:05:50.588 17:15:10 -- scripts/common.sh@337 -- # local 'op=<' 00:05:50.588 17:15:10 -- scripts/common.sh@339 -- # ver1_l=2 00:05:50.588 17:15:10 -- scripts/common.sh@340 -- # ver2_l=1 00:05:50.588 17:15:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:50.588 17:15:10 -- scripts/common.sh@343 -- # case "$op" in 00:05:50.588 17:15:10 -- scripts/common.sh@344 -- # : 1 00:05:50.588 17:15:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:50.588 17:15:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.588 17:15:10 -- scripts/common.sh@364 -- # decimal 1 00:05:50.588 17:15:10 -- scripts/common.sh@352 -- # local d=1 00:05:50.588 17:15:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.588 17:15:10 -- scripts/common.sh@354 -- # echo 1 00:05:50.588 17:15:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:50.588 17:15:10 -- scripts/common.sh@365 -- # decimal 2 00:05:50.588 17:15:10 -- scripts/common.sh@352 -- # local d=2 00:05:50.588 17:15:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.588 17:15:10 -- scripts/common.sh@354 -- # echo 2 00:05:50.588 17:15:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:50.588 17:15:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:50.588 17:15:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:50.588 17:15:10 -- scripts/common.sh@367 -- # return 0 00:05:50.588 17:15:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.588 17:15:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:50.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.588 --rc genhtml_branch_coverage=1 00:05:50.588 --rc genhtml_function_coverage=1 00:05:50.588 --rc genhtml_legend=1 00:05:50.588 --rc geninfo_all_blocks=1 00:05:50.588 --rc geninfo_unexecuted_blocks=1 00:05:50.588 00:05:50.588 ' 00:05:50.588 17:15:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:50.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.588 --rc genhtml_branch_coverage=1 00:05:50.588 --rc genhtml_function_coverage=1 00:05:50.588 --rc genhtml_legend=1 00:05:50.588 --rc geninfo_all_blocks=1 00:05:50.588 --rc geninfo_unexecuted_blocks=1 00:05:50.588 00:05:50.588 ' 00:05:50.588 17:15:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:50.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.588 --rc genhtml_branch_coverage=1 00:05:50.588 --rc genhtml_function_coverage=1 00:05:50.588 --rc genhtml_legend=1 00:05:50.588 --rc geninfo_all_blocks=1 00:05:50.588 --rc geninfo_unexecuted_blocks=1 00:05:50.588 00:05:50.588 ' 
00:05:50.588 17:15:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:50.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.588 --rc genhtml_branch_coverage=1 00:05:50.588 --rc genhtml_function_coverage=1 00:05:50.588 --rc genhtml_legend=1 00:05:50.588 --rc geninfo_all_blocks=1 00:05:50.588 --rc geninfo_unexecuted_blocks=1 00:05:50.588 00:05:50.588 ' 00:05:50.588 17:15:10 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:50.588 17:15:10 -- bdev/nbd_common.sh@6 -- # set -e 00:05:50.588 17:15:10 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:50.588 17:15:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:50.588 17:15:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.588 17:15:10 -- common/autotest_common.sh@10 -- # set +x 00:05:50.588 ************************************ 00:05:50.588 START TEST event_perf 00:05:50.588 ************************************ 00:05:50.588 17:15:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:50.588 Running I/O for 1 seconds...[2024-11-09 17:15:10.230411] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.588 [2024-11-09 17:15:10.230507] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526585 ] 00:05:50.588 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.588 [2024-11-09 17:15:10.303069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.847 [2024-11-09 17:15:10.374739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.847 [2024-11-09 17:15:10.374836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.847 [2024-11-09 17:15:10.374897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.847 [2024-11-09 17:15:10.374899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.782 Running I/O for 1 seconds... 00:05:51.782 lcore 0: 216220 00:05:51.782 lcore 1: 216220 00:05:51.782 lcore 2: 216220 00:05:51.782 lcore 3: 216221 00:05:51.782 done. 
00:05:51.782 00:05:51.782 real 0m1.249s 00:05:51.782 user 0m4.156s 00:05:51.782 sys 0m0.090s 00:05:51.782 17:15:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.782 17:15:11 -- common/autotest_common.sh@10 -- # set +x 00:05:51.782 ************************************ 00:05:51.782 END TEST event_perf 00:05:51.782 ************************************ 00:05:51.782 17:15:11 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:51.782 17:15:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:51.782 17:15:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.782 17:15:11 -- common/autotest_common.sh@10 -- # set +x 00:05:51.782 ************************************ 00:05:51.782 START TEST event_reactor 00:05:51.782 ************************************ 00:05:51.782 17:15:11 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:51.782 [2024-11-09 17:15:11.527533] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.782 [2024-11-09 17:15:11.527621] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526813 ] 00:05:52.040 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.040 [2024-11-09 17:15:11.598295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.040 [2024-11-09 17:15:11.663782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.975 test_start 00:05:52.975 oneshot 00:05:52.975 tick 100 00:05:52.975 tick 100 00:05:52.975 tick 250 00:05:52.975 tick 100 00:05:52.975 tick 100 00:05:52.975 tick 100 00:05:52.975 tick 250 00:05:52.975 tick 500 00:05:52.975 tick 100 00:05:52.975 tick 100 00:05:52.975 tick 250 00:05:52.975 tick 100 00:05:52.975 tick 100 00:05:52.975 test_end 00:05:52.975 00:05:52.975 real 0m1.238s 00:05:52.975 user 0m1.144s 00:05:52.975 sys 0m0.090s 00:05:52.975 17:15:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.975 17:15:12 -- common/autotest_common.sh@10 -- # set +x 00:05:52.975 ************************************ 00:05:52.975 END TEST event_reactor 00:05:52.975 ************************************ 00:05:53.233 17:15:12 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:53.233 17:15:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:53.233 17:15:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.233 17:15:12 -- common/autotest_common.sh@10 -- # set +x 00:05:53.233 ************************************ 00:05:53.233 START TEST event_reactor_perf 00:05:53.233 ************************************ 00:05:53.233 17:15:12 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:53.233 [2024-11-09 17:15:12.817104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:53.233 [2024-11-09 17:15:12.817192] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2527102 ] 00:05:53.233 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.233 [2024-11-09 17:15:12.888403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.233 [2024-11-09 17:15:12.952335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.608 test_start 00:05:54.608 test_end 00:05:54.608 Performance: 514389 events per second 00:05:54.608 00:05:54.608 real 0m1.244s 00:05:54.608 user 0m1.154s 00:05:54.608 sys 0m0.086s 00:05:54.608 17:15:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.608 17:15:14 -- common/autotest_common.sh@10 -- # set +x 00:05:54.608 ************************************ 00:05:54.608 END TEST event_reactor_perf 00:05:54.608 ************************************ 00:05:54.608 17:15:14 -- event/event.sh@49 -- # uname -s 00:05:54.608 17:15:14 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:54.608 17:15:14 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:54.608 17:15:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.608 17:15:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.608 17:15:14 -- common/autotest_common.sh@10 -- # set +x 00:05:54.608 ************************************ 00:05:54.608 START TEST event_scheduler 00:05:54.608 ************************************ 00:05:54.608 17:15:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:54.608 * Looking for test storage... 00:05:54.608 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:54.608 17:15:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:54.608 17:15:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:54.608 17:15:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:54.608 17:15:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:54.608 17:15:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:54.608 17:15:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:54.608 17:15:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:54.608 17:15:14 -- scripts/common.sh@335 -- # IFS=.-: 00:05:54.608 17:15:14 -- scripts/common.sh@335 -- # read -ra ver1 00:05:54.608 17:15:14 -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.608 17:15:14 -- scripts/common.sh@336 -- # read -ra ver2 00:05:54.608 17:15:14 -- scripts/common.sh@337 -- # local 'op=<' 00:05:54.608 17:15:14 -- scripts/common.sh@339 -- # ver1_l=2 00:05:54.608 17:15:14 -- scripts/common.sh@340 -- # ver2_l=1 00:05:54.608 17:15:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:54.608 17:15:14 -- scripts/common.sh@343 -- # case "$op" in 00:05:54.608 17:15:14 -- scripts/common.sh@344 -- # : 1 00:05:54.608 17:15:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:54.608 17:15:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.608 17:15:14 -- scripts/common.sh@364 -- # decimal 1 00:05:54.608 17:15:14 -- scripts/common.sh@352 -- # local d=1 00:05:54.608 17:15:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.608 17:15:14 -- scripts/common.sh@354 -- # echo 1 00:05:54.608 17:15:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:54.608 17:15:14 -- scripts/common.sh@365 -- # decimal 2 00:05:54.608 17:15:14 -- scripts/common.sh@352 -- # local d=2 00:05:54.608 17:15:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.608 17:15:14 -- scripts/common.sh@354 -- # echo 2 00:05:54.608 17:15:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:54.608 17:15:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:54.608 17:15:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:54.608 17:15:14 -- scripts/common.sh@367 -- # return 0 00:05:54.608 17:15:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.609 17:15:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:54.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.609 --rc genhtml_branch_coverage=1 00:05:54.609 --rc genhtml_function_coverage=1 00:05:54.609 --rc genhtml_legend=1 00:05:54.609 --rc geninfo_all_blocks=1 00:05:54.609 --rc geninfo_unexecuted_blocks=1 00:05:54.609 00:05:54.609 ' 00:05:54.609 17:15:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:54.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.609 --rc genhtml_branch_coverage=1 00:05:54.609 --rc genhtml_function_coverage=1 00:05:54.609 --rc genhtml_legend=1 00:05:54.609 --rc geninfo_all_blocks=1 00:05:54.609 --rc geninfo_unexecuted_blocks=1 00:05:54.609 00:05:54.609 ' 00:05:54.609 17:15:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:54.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.609 --rc genhtml_branch_coverage=1 00:05:54.609 --rc genhtml_function_coverage=1 00:05:54.609 --rc genhtml_legend=1 00:05:54.609 --rc geninfo_all_blocks=1 00:05:54.609 --rc geninfo_unexecuted_blocks=1 00:05:54.609 00:05:54.609 ' 00:05:54.609 17:15:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:54.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.609 --rc genhtml_branch_coverage=1 00:05:54.609 --rc genhtml_function_coverage=1 00:05:54.609 --rc genhtml_legend=1 00:05:54.609 --rc geninfo_all_blocks=1 00:05:54.609 --rc geninfo_unexecuted_blocks=1 00:05:54.609 00:05:54.609 ' 00:05:54.609 17:15:14 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:54.609 17:15:14 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2527418 00:05:54.609 17:15:14 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.609 17:15:14 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:54.609 17:15:14 -- scheduler/scheduler.sh@37 -- # waitforlisten 2527418 00:05:54.609 17:15:14 -- common/autotest_common.sh@829 -- # '[' -z 2527418 ']' 00:05:54.609 17:15:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.609 17:15:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.609 17:15:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:54.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.609 17:15:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.609 17:15:14 -- common/autotest_common.sh@10 -- # set +x 00:05:54.609 [2024-11-09 17:15:14.322254] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.609 [2024-11-09 17:15:14.322302] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2527418 ] 00:05:54.609 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.867 [2024-11-09 17:15:14.386475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.867 [2024-11-09 17:15:14.460986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.867 [2024-11-09 17:15:14.461067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.867 [2024-11-09 17:15:14.461151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.867 [2024-11-09 17:15:14.461153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.436 17:15:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.436 17:15:15 -- common/autotest_common.sh@862 -- # return 0 00:05:55.436 17:15:15 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:55.436 17:15:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.436 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:55.436 POWER: Env isn't set yet! 00:05:55.436 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:55.436 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.436 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.436 POWER: Attempting to initialise PSTAT power management... 00:05:55.436 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:55.436 POWER: Initialized successfully for lcore 0 power management 00:05:55.436 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:55.436 POWER: Initialized successfully for lcore 1 power management 00:05:55.436 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:55.436 POWER: Initialized successfully for lcore 2 power management 00:05:55.436 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:55.436 POWER: Initialized successfully for lcore 3 power management 00:05:55.436 [2024-11-09 17:15:15.197637] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:55.436 [2024-11-09 17:15:15.197651] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:55.436 [2024-11-09 17:15:15.197661] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:55.436 17:15:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.436 17:15:15 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:55.436 17:15:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.436 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:55.695 [2024-11-09 17:15:15.266412] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
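For context only (not part of the captured output): the trace above shows scheduler.sh switching the test application to the dynamic scheduler over JSON-RPC and then completing framework init. A minimal sketch of issuing the same framework RPCs by hand against an SPDK app started with --wait-for-rpc, assuming the same workspace path and the default /var/tmp/spdk.sock socket used throughout this run:

  # rpc.py as invoked elsewhere in this log
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  $RPC framework_set_scheduler dynamic   # same call as scheduler/scheduler.sh@39 above
  $RPC framework_get_scheduler           # verify which scheduler is active
  $RPC framework_start_init              # finish subsystem init, as in scheduler/scheduler.sh@40
  $RPC thread_get_stats                  # inspect the reactor threads the test exercises

All four method names appear in the rpc_get_methods listing captured earlier in this log. The scheduler values logged by scheduler_dynamic.c (load limit 20, core limit 80, core busy 95) are what the dynamic scheduler reports when framework_set_scheduler is called with no options, as it is here; they are not parameters passed by the test.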
00:05:55.695 17:15:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.695 17:15:15 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:55.695 17:15:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.695 17:15:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.695 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:55.695 ************************************ 00:05:55.695 START TEST scheduler_create_thread 00:05:55.695 ************************************ 00:05:55.695 17:15:15 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:55.695 17:15:15 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:55.695 17:15:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.695 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:55.695 2 00:05:55.695 17:15:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.695 17:15:15 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:55.695 17:15:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.695 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:55.695 3 00:05:55.695 17:15:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.695 17:15:15 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:55.695 17:15:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.695 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:55.695 4 00:05:55.695 17:15:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.695 17:15:15 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:55.695 17:15:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.695 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:55.695 5 00:05:55.695 17:15:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.695 17:15:15 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:55.695 17:15:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.695 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:55.695 6 00:05:55.695 17:15:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.695 17:15:15 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:55.695 17:15:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.695 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:55.695 7 00:05:55.695 17:15:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.695 17:15:15 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:55.695 17:15:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.695 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:55.695 8 00:05:55.695 17:15:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.695 17:15:15 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:55.695 17:15:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.695 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:55.695 9 00:05:55.695 
17:15:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.695 17:15:15 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:55.695 17:15:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.695 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:55.695 10 00:05:55.695 17:15:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.695 17:15:15 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:55.695 17:15:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.695 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:55.695 17:15:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.695 17:15:15 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:55.695 17:15:15 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:55.696 17:15:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.696 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:56.632 17:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.632 17:15:16 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:56.632 17:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.632 17:15:16 -- common/autotest_common.sh@10 -- # set +x 00:05:58.009 17:15:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.009 17:15:17 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.009 17:15:17 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.009 17:15:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.009 17:15:17 -- common/autotest_common.sh@10 -- # set +x 00:05:58.946 17:15:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.946 00:05:58.946 real 0m3.382s 00:05:58.946 user 0m0.020s 00:05:58.946 sys 0m0.011s 00:05:58.946 17:15:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.946 17:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:58.946 ************************************ 00:05:58.946 END TEST scheduler_create_thread 00:05:58.946 ************************************ 00:05:58.946 17:15:18 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:58.946 17:15:18 -- scheduler/scheduler.sh@46 -- # killprocess 2527418 00:05:58.946 17:15:18 -- common/autotest_common.sh@936 -- # '[' -z 2527418 ']' 00:05:58.946 17:15:18 -- common/autotest_common.sh@940 -- # kill -0 2527418 00:05:58.946 17:15:18 -- common/autotest_common.sh@941 -- # uname 00:05:58.946 17:15:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.946 17:15:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2527418 00:05:59.205 17:15:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:59.205 17:15:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:59.205 17:15:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2527418' 00:05:59.205 killing process with pid 2527418 00:05:59.205 17:15:18 -- common/autotest_common.sh@955 -- # kill 2527418 00:05:59.205 17:15:18 -- common/autotest_common.sh@960 -- # wait 2527418 00:05:59.465 [2024-11-09 17:15:19.038250] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
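The scheduler_create_thread test above is driven entirely through the scheduler_plugin RPCs. A condensed sketch of that sequence, assuming scripts/rpc.py can import the plugin module (the real test arranges PYTHONPATH for this) and that the target listens on /var/tmp/spdk.sock:

SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc() { "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock --plugin scheduler_plugin "$@"; }

# pinned busy threads (-a 100 = 100 % active) and pinned idle threads (-a 0)
rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100
rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0

# scheduler_thread_create prints the new thread id; capture it to adjust the thread later
thread_id=$(rpc scheduler_thread_create -n half_active -a 0)
rpc scheduler_thread_set_active "$thread_id" 50

# create one more thread purely to exercise deletion
thread_id=$(rpc scheduler_thread_create -n deleted -a 100)
rpc scheduler_thread_delete "$thread_id"

The trace then tears the app down with killprocess, which lets the scheduler app restore the original cpufreq governors, as the POWER messages that follow show.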
00:05:59.465 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:59.465 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:59.465 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:59.465 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:59.465 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:59.465 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:59.465 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:59.465 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:59.730 00:05:59.730 real 0m5.191s 00:05:59.730 user 0m10.611s 00:05:59.730 sys 0m0.449s 00:05:59.730 17:15:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.730 17:15:19 -- common/autotest_common.sh@10 -- # set +x 00:05:59.730 ************************************ 00:05:59.730 END TEST event_scheduler 00:05:59.730 ************************************ 00:05:59.730 17:15:19 -- event/event.sh@51 -- # modprobe -n nbd 00:05:59.730 17:15:19 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:59.730 17:15:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.730 17:15:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.730 17:15:19 -- common/autotest_common.sh@10 -- # set +x 00:05:59.730 ************************************ 00:05:59.730 START TEST app_repeat 00:05:59.730 ************************************ 00:05:59.730 17:15:19 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:59.730 17:15:19 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.730 17:15:19 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.730 17:15:19 -- event/event.sh@13 -- # local nbd_list 00:05:59.730 17:15:19 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.730 17:15:19 -- event/event.sh@14 -- # local bdev_list 00:05:59.730 17:15:19 -- event/event.sh@15 -- # local repeat_times=4 00:05:59.730 17:15:19 -- event/event.sh@17 -- # modprobe nbd 00:05:59.730 17:15:19 -- event/event.sh@19 -- # repeat_pid=2528299 00:05:59.730 17:15:19 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.730 17:15:19 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:59.730 17:15:19 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2528299' 00:05:59.730 Process app_repeat pid: 2528299 00:05:59.730 17:15:19 -- event/event.sh@23 -- # for i in {0..2} 00:05:59.730 17:15:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:59.730 spdk_app_start Round 0 00:05:59.730 17:15:19 -- event/event.sh@25 -- # waitforlisten 2528299 /var/tmp/spdk-nbd.sock 00:05:59.730 17:15:19 -- common/autotest_common.sh@829 -- # '[' -z 2528299 ']' 00:05:59.730 17:15:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.730 17:15:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.730 17:15:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:59.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.730 17:15:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.730 17:15:19 -- common/autotest_common.sh@10 -- # set +x 00:05:59.730 [2024-11-09 17:15:19.380169] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.730 [2024-11-09 17:15:19.380249] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528299 ] 00:05:59.730 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.730 [2024-11-09 17:15:19.453411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.070 [2024-11-09 17:15:19.532043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.070 [2024-11-09 17:15:19.532046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.638 17:15:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.638 17:15:20 -- common/autotest_common.sh@862 -- # return 0 00:06:00.638 17:15:20 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.638 Malloc0 00:06:00.897 17:15:20 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.897 Malloc1 00:06:00.897 17:15:20 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@12 -- # local i 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.897 17:15:20 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.156 /dev/nbd0 00:06:01.156 17:15:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.156 17:15:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.156 17:15:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:01.156 17:15:20 -- common/autotest_common.sh@867 -- # local i 00:06:01.156 17:15:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.156 17:15:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.156 17:15:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:01.156 17:15:20 -- common/autotest_common.sh@871 -- 
# break 00:06:01.156 17:15:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.156 17:15:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.156 17:15:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.156 1+0 records in 00:06:01.156 1+0 records out 00:06:01.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234924 s, 17.4 MB/s 00:06:01.156 17:15:20 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.156 17:15:20 -- common/autotest_common.sh@884 -- # size=4096 00:06:01.156 17:15:20 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.156 17:15:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.156 17:15:20 -- common/autotest_common.sh@887 -- # return 0 00:06:01.156 17:15:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.156 17:15:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.156 17:15:20 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.416 /dev/nbd1 00:06:01.416 17:15:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.416 17:15:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.416 17:15:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:01.416 17:15:21 -- common/autotest_common.sh@867 -- # local i 00:06:01.416 17:15:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.416 17:15:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.416 17:15:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:01.416 17:15:21 -- common/autotest_common.sh@871 -- # break 00:06:01.416 17:15:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.416 17:15:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.416 17:15:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.416 1+0 records in 00:06:01.416 1+0 records out 00:06:01.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275251 s, 14.9 MB/s 00:06:01.416 17:15:21 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.416 17:15:21 -- common/autotest_common.sh@884 -- # size=4096 00:06:01.416 17:15:21 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.416 17:15:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.416 17:15:21 -- common/autotest_common.sh@887 -- # return 0 00:06:01.416 17:15:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.416 17:15:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.416 17:15:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.416 17:15:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.416 17:15:21 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.675 { 00:06:01.675 "nbd_device": "/dev/nbd0", 00:06:01.675 "bdev_name": "Malloc0" 00:06:01.675 }, 00:06:01.675 { 00:06:01.675 "nbd_device": "/dev/nbd1", 00:06:01.675 "bdev_name": "Malloc1" 00:06:01.675 } 00:06:01.675 ]' 
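The waitfornbd helper traced above (common/autotest_common.sh) gates the test on the nbd device actually coming up. A simplified sketch of what it does; the retry pacing and the scratch-file path are assumptions:

waitfornbd() {
    local nbd_name=$1 scratch=/tmp/nbdtest i size
    # wait (up to 20 tries) for the device to appear in the kernel's partition list
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    # read one 4 KiB block with O_DIRECT to prove the device actually services I/O
    dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$scratch")
    rm -f "$scratch"
    [ "$size" != 0 ]
}

# used right after "rpc.py ... nbd_start_disk Malloc0 /dev/nbd0":
waitfornbd nbd0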
00:06:01.675 17:15:21 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.675 { 00:06:01.675 "nbd_device": "/dev/nbd0", 00:06:01.675 "bdev_name": "Malloc0" 00:06:01.675 }, 00:06:01.675 { 00:06:01.675 "nbd_device": "/dev/nbd1", 00:06:01.675 "bdev_name": "Malloc1" 00:06:01.675 } 00:06:01.675 ]' 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.675 /dev/nbd1' 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.675 /dev/nbd1' 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.675 256+0 records in 00:06:01.675 256+0 records out 00:06:01.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107074 s, 97.9 MB/s 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.675 256+0 records in 00:06:01.675 256+0 records out 00:06:01.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195926 s, 53.5 MB/s 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.675 256+0 records in 00:06:01.675 256+0 records out 00:06:01.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205269 s, 51.1 MB/s 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
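The nbd_dd_data_verify flow traced above (bdev/nbd_common.sh) is the actual data-integrity check: 1 MiB of random data is pushed through each nbd device and then compared back against the source file. A minimal sketch with simplified paths; the real helper runs the write and verify phases as two separate calls:

pattern=/tmp/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

# write phase: generate the pattern once, then copy it onto every device with O_DIRECT
dd if=/dev/urandom of="$pattern" bs=4096 count=256
for nbd in "${nbd_list[@]}"; do
    dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct
done

# verify phase: each device must read back byte-identical to the pattern file
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$pattern" "$nbd"
done
rm "$pattern"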
00:06:01.675 17:15:21 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.675 17:15:21 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.676 17:15:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.676 17:15:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.676 17:15:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.676 17:15:21 -- bdev/nbd_common.sh@51 -- # local i 00:06:01.676 17:15:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.676 17:15:21 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.934 17:15:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.935 17:15:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.935 17:15:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.935 17:15:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.935 17:15:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.935 17:15:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.935 17:15:21 -- bdev/nbd_common.sh@41 -- # break 00:06:01.935 17:15:21 -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.935 17:15:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.935 17:15:21 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.193 17:15:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.193 17:15:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.193 17:15:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.193 17:15:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.194 17:15:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.194 17:15:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.194 17:15:21 -- bdev/nbd_common.sh@41 -- # break 00:06:02.194 17:15:21 -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.194 17:15:21 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.194 17:15:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.194 17:15:21 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.194 17:15:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.194 17:15:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.194 17:15:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.453 17:15:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.453 17:15:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.453 17:15:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.453 17:15:21 -- bdev/nbd_common.sh@65 -- # true 00:06:02.453 17:15:21 -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.453 17:15:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.453 17:15:21 -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.453 17:15:21 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.453 17:15:21 -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.453 17:15:21 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.453 17:15:22 -- event/event.sh@35 -- # sleep 3 00:06:02.713 [2024-11-09 17:15:22.382435] app.c: 798:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:06:02.713 [2024-11-09 17:15:22.446090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.713 [2024-11-09 17:15:22.446093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.972 [2024-11-09 17:15:22.487708] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.972 [2024-11-09 17:15:22.487745] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.506 17:15:25 -- event/event.sh@23 -- # for i in {0..2} 00:06:05.506 17:15:25 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:05.506 spdk_app_start Round 1 00:06:05.506 17:15:25 -- event/event.sh@25 -- # waitforlisten 2528299 /var/tmp/spdk-nbd.sock 00:06:05.506 17:15:25 -- common/autotest_common.sh@829 -- # '[' -z 2528299 ']' 00:06:05.506 17:15:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.506 17:15:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.506 17:15:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.506 17:15:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.506 17:15:25 -- common/autotest_common.sh@10 -- # set +x 00:06:05.765 17:15:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.765 17:15:25 -- common/autotest_common.sh@862 -- # return 0 00:06:05.765 17:15:25 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.765 Malloc0 00:06:05.765 17:15:25 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.024 Malloc1 00:06:06.024 17:15:25 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@12 -- # local i 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.024 17:15:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.283 /dev/nbd0 00:06:06.283 17:15:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.283 17:15:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.283 
17:15:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:06.283 17:15:25 -- common/autotest_common.sh@867 -- # local i 00:06:06.283 17:15:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:06.283 17:15:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:06.283 17:15:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:06.283 17:15:25 -- common/autotest_common.sh@871 -- # break 00:06:06.283 17:15:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:06.283 17:15:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:06.283 17:15:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.283 1+0 records in 00:06:06.283 1+0 records out 00:06:06.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237594 s, 17.2 MB/s 00:06:06.283 17:15:25 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:06.283 17:15:25 -- common/autotest_common.sh@884 -- # size=4096 00:06:06.283 17:15:25 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:06.283 17:15:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:06.283 17:15:25 -- common/autotest_common.sh@887 -- # return 0 00:06:06.283 17:15:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.283 17:15:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.283 17:15:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.542 /dev/nbd1 00:06:06.542 17:15:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.542 17:15:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.542 17:15:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:06.542 17:15:26 -- common/autotest_common.sh@867 -- # local i 00:06:06.542 17:15:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:06.542 17:15:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:06.542 17:15:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:06.542 17:15:26 -- common/autotest_common.sh@871 -- # break 00:06:06.542 17:15:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:06.542 17:15:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:06.542 17:15:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.542 1+0 records in 00:06:06.542 1+0 records out 00:06:06.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250582 s, 16.3 MB/s 00:06:06.542 17:15:26 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:06.542 17:15:26 -- common/autotest_common.sh@884 -- # size=4096 00:06:06.542 17:15:26 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:06.542 17:15:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:06.542 17:15:26 -- common/autotest_common.sh@887 -- # return 0 00:06:06.542 17:15:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.542 17:15:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.542 17:15:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.542 17:15:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.543 
17:15:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.802 { 00:06:06.802 "nbd_device": "/dev/nbd0", 00:06:06.802 "bdev_name": "Malloc0" 00:06:06.802 }, 00:06:06.802 { 00:06:06.802 "nbd_device": "/dev/nbd1", 00:06:06.802 "bdev_name": "Malloc1" 00:06:06.802 } 00:06:06.802 ]' 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.802 { 00:06:06.802 "nbd_device": "/dev/nbd0", 00:06:06.802 "bdev_name": "Malloc0" 00:06:06.802 }, 00:06:06.802 { 00:06:06.802 "nbd_device": "/dev/nbd1", 00:06:06.802 "bdev_name": "Malloc1" 00:06:06.802 } 00:06:06.802 ]' 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.802 /dev/nbd1' 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.802 /dev/nbd1' 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.802 256+0 records in 00:06:06.802 256+0 records out 00:06:06.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115822 s, 90.5 MB/s 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.802 256+0 records in 00:06:06.802 256+0 records out 00:06:06.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193574 s, 54.2 MB/s 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.802 256+0 records in 00:06:06.802 256+0 records out 00:06:06.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205635 s, 51.0 MB/s 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.802 
17:15:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@51 -- # local i 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.802 17:15:26 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.061 17:15:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.061 17:15:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.061 17:15:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.061 17:15:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.061 17:15:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.061 17:15:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.061 17:15:26 -- bdev/nbd_common.sh@41 -- # break 00:06:07.061 17:15:26 -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.061 17:15:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.061 17:15:26 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.319 17:15:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.319 17:15:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.319 17:15:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.319 17:15:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.319 17:15:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.319 17:15:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.319 17:15:26 -- bdev/nbd_common.sh@41 -- # break 00:06:07.319 17:15:26 -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.319 17:15:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.319 17:15:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.319 17:15:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.319 17:15:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.319 17:15:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.319 17:15:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.319 17:15:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.319 17:15:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.319 17:15:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.578 17:15:27 -- bdev/nbd_common.sh@65 -- # true 00:06:07.578 17:15:27 -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.578 17:15:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.578 17:15:27 -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.578 
17:15:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.578 17:15:27 -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.578 17:15:27 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.578 17:15:27 -- event/event.sh@35 -- # sleep 3 00:06:07.837 [2024-11-09 17:15:27.498506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.837 [2024-11-09 17:15:27.559925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.837 [2024-11-09 17:15:27.559928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.837 [2024-11-09 17:15:27.601101] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.837 [2024-11-09 17:15:27.601157] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.126 17:15:30 -- event/event.sh@23 -- # for i in {0..2} 00:06:11.126 17:15:30 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:11.126 spdk_app_start Round 2 00:06:11.126 17:15:30 -- event/event.sh@25 -- # waitforlisten 2528299 /var/tmp/spdk-nbd.sock 00:06:11.126 17:15:30 -- common/autotest_common.sh@829 -- # '[' -z 2528299 ']' 00:06:11.126 17:15:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.126 17:15:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.126 17:15:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.126 17:15:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.126 17:15:30 -- common/autotest_common.sh@10 -- # set +x 00:06:11.126 17:15:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.126 17:15:30 -- common/autotest_common.sh@862 -- # return 0 00:06:11.126 17:15:30 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.126 Malloc0 00:06:11.126 17:15:30 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.126 Malloc1 00:06:11.126 17:15:30 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@12 -- # local i 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@14 -- 
# (( i = 0 )) 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.126 17:15:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.386 /dev/nbd0 00:06:11.386 17:15:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.386 17:15:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.386 17:15:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:11.386 17:15:31 -- common/autotest_common.sh@867 -- # local i 00:06:11.386 17:15:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.386 17:15:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.386 17:15:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:11.386 17:15:31 -- common/autotest_common.sh@871 -- # break 00:06:11.386 17:15:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.386 17:15:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.386 17:15:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.386 1+0 records in 00:06:11.386 1+0 records out 00:06:11.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287435 s, 14.3 MB/s 00:06:11.386 17:15:31 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:11.386 17:15:31 -- common/autotest_common.sh@884 -- # size=4096 00:06:11.386 17:15:31 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:11.386 17:15:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.386 17:15:31 -- common/autotest_common.sh@887 -- # return 0 00:06:11.386 17:15:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.386 17:15:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.386 17:15:31 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.645 /dev/nbd1 00:06:11.645 17:15:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.645 17:15:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.645 17:15:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:11.645 17:15:31 -- common/autotest_common.sh@867 -- # local i 00:06:11.645 17:15:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.645 17:15:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.645 17:15:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:11.645 17:15:31 -- common/autotest_common.sh@871 -- # break 00:06:11.645 17:15:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.645 17:15:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.645 17:15:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.645 1+0 records in 00:06:11.645 1+0 records out 00:06:11.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233821 s, 17.5 MB/s 00:06:11.645 17:15:31 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:11.645 17:15:31 -- common/autotest_common.sh@884 -- # size=4096 00:06:11.645 17:15:31 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:11.645 17:15:31 -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.645 17:15:31 -- common/autotest_common.sh@887 -- # return 0 00:06:11.645 17:15:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.645 17:15:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.645 17:15:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.645 17:15:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.645 17:15:31 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.905 { 00:06:11.905 "nbd_device": "/dev/nbd0", 00:06:11.905 "bdev_name": "Malloc0" 00:06:11.905 }, 00:06:11.905 { 00:06:11.905 "nbd_device": "/dev/nbd1", 00:06:11.905 "bdev_name": "Malloc1" 00:06:11.905 } 00:06:11.905 ]' 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.905 { 00:06:11.905 "nbd_device": "/dev/nbd0", 00:06:11.905 "bdev_name": "Malloc0" 00:06:11.905 }, 00:06:11.905 { 00:06:11.905 "nbd_device": "/dev/nbd1", 00:06:11.905 "bdev_name": "Malloc1" 00:06:11.905 } 00:06:11.905 ]' 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.905 /dev/nbd1' 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.905 /dev/nbd1' 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.905 256+0 records in 00:06:11.905 256+0 records out 00:06:11.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107076 s, 97.9 MB/s 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.905 256+0 records in 00:06:11.905 256+0 records out 00:06:11.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196396 s, 53.4 MB/s 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.905 256+0 records in 00:06:11.905 256+0 records out 00:06:11.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201654 s, 52.0 MB/s 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@51 -- # local i 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.905 17:15:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.165 17:15:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.165 17:15:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.165 17:15:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.165 17:15:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.165 17:15:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.165 17:15:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.165 17:15:31 -- bdev/nbd_common.sh@41 -- # break 00:06:12.165 17:15:31 -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.165 17:15:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.165 17:15:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.424 17:15:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.424 17:15:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.424 17:15:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.424 17:15:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.424 17:15:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.424 17:15:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.424 17:15:31 -- bdev/nbd_common.sh@41 -- # break 00:06:12.424 17:15:31 -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.424 17:15:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.424 17:15:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.424 17:15:31 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.424 17:15:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.424 17:15:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.424 17:15:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:12.684 17:15:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.684 17:15:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.684 17:15:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.684 17:15:32 -- bdev/nbd_common.sh@65 -- # true 00:06:12.684 17:15:32 -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.684 17:15:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.684 17:15:32 -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.684 17:15:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.684 17:15:32 -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.684 17:15:32 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.684 17:15:32 -- event/event.sh@35 -- # sleep 3 00:06:12.943 [2024-11-09 17:15:32.609150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.943 [2024-11-09 17:15:32.669366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.943 [2024-11-09 17:15:32.669369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.943 [2024-11-09 17:15:32.710652] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.943 [2024-11-09 17:15:32.710689] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:16.233 17:15:35 -- event/event.sh@38 -- # waitforlisten 2528299 /var/tmp/spdk-nbd.sock 00:06:16.233 17:15:35 -- common/autotest_common.sh@829 -- # '[' -z 2528299 ']' 00:06:16.233 17:15:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.233 17:15:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.233 17:15:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:16.233 17:15:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.233 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:16.233 17:15:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.233 17:15:35 -- common/autotest_common.sh@862 -- # return 0 00:06:16.233 17:15:35 -- event/event.sh@39 -- # killprocess 2528299 00:06:16.233 17:15:35 -- common/autotest_common.sh@936 -- # '[' -z 2528299 ']' 00:06:16.233 17:15:35 -- common/autotest_common.sh@940 -- # kill -0 2528299 00:06:16.233 17:15:35 -- common/autotest_common.sh@941 -- # uname 00:06:16.233 17:15:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.233 17:15:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2528299 00:06:16.233 17:15:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.233 17:15:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.233 17:15:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2528299' 00:06:16.233 killing process with pid 2528299 00:06:16.233 17:15:35 -- common/autotest_common.sh@955 -- # kill 2528299 00:06:16.233 17:15:35 -- common/autotest_common.sh@960 -- # wait 2528299 00:06:16.233 spdk_app_start is called in Round 0. 00:06:16.233 Shutdown signal received, stop current app iteration 00:06:16.233 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:16.233 spdk_app_start is called in Round 1. 
00:06:16.233 Shutdown signal received, stop current app iteration 00:06:16.233 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:16.233 spdk_app_start is called in Round 2. 00:06:16.233 Shutdown signal received, stop current app iteration 00:06:16.233 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:16.233 spdk_app_start is called in Round 3. 00:06:16.233 Shutdown signal received, stop current app iteration 00:06:16.233 17:15:35 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:16.233 17:15:35 -- event/event.sh@42 -- # return 0 00:06:16.233 00:06:16.233 real 0m16.493s 00:06:16.233 user 0m35.256s 00:06:16.233 sys 0m2.907s 00:06:16.233 17:15:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.233 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:16.233 ************************************ 00:06:16.233 END TEST app_repeat 00:06:16.233 ************************************ 00:06:16.233 17:15:35 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:16.233 17:15:35 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:16.233 17:15:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.233 17:15:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.233 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:16.233 ************************************ 00:06:16.233 START TEST cpu_locks 00:06:16.233 ************************************ 00:06:16.233 17:15:35 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:16.233 * Looking for test storage... 00:06:16.233 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:16.233 17:15:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:16.233 17:15:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:16.233 17:15:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:16.492 17:15:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:16.492 17:15:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:16.492 17:15:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:16.492 17:15:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:16.492 17:15:36 -- scripts/common.sh@335 -- # IFS=.-: 00:06:16.492 17:15:36 -- scripts/common.sh@335 -- # read -ra ver1 00:06:16.492 17:15:36 -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.492 17:15:36 -- scripts/common.sh@336 -- # read -ra ver2 00:06:16.492 17:15:36 -- scripts/common.sh@337 -- # local 'op=<' 00:06:16.492 17:15:36 -- scripts/common.sh@339 -- # ver1_l=2 00:06:16.492 17:15:36 -- scripts/common.sh@340 -- # ver2_l=1 00:06:16.492 17:15:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:16.492 17:15:36 -- scripts/common.sh@343 -- # case "$op" in 00:06:16.492 17:15:36 -- scripts/common.sh@344 -- # : 1 00:06:16.492 17:15:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:16.492 17:15:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.492 17:15:36 -- scripts/common.sh@364 -- # decimal 1 00:06:16.492 17:15:36 -- scripts/common.sh@352 -- # local d=1 00:06:16.492 17:15:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.492 17:15:36 -- scripts/common.sh@354 -- # echo 1 00:06:16.492 17:15:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:16.492 17:15:36 -- scripts/common.sh@365 -- # decimal 2 00:06:16.492 17:15:36 -- scripts/common.sh@352 -- # local d=2 00:06:16.492 17:15:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.492 17:15:36 -- scripts/common.sh@354 -- # echo 2 00:06:16.492 17:15:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:16.492 17:15:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:16.492 17:15:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:16.492 17:15:36 -- scripts/common.sh@367 -- # return 0 00:06:16.492 17:15:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.492 17:15:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:16.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.492 --rc genhtml_branch_coverage=1 00:06:16.492 --rc genhtml_function_coverage=1 00:06:16.492 --rc genhtml_legend=1 00:06:16.492 --rc geninfo_all_blocks=1 00:06:16.492 --rc geninfo_unexecuted_blocks=1 00:06:16.492 00:06:16.492 ' 00:06:16.492 17:15:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:16.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.492 --rc genhtml_branch_coverage=1 00:06:16.492 --rc genhtml_function_coverage=1 00:06:16.492 --rc genhtml_legend=1 00:06:16.492 --rc geninfo_all_blocks=1 00:06:16.492 --rc geninfo_unexecuted_blocks=1 00:06:16.492 00:06:16.492 ' 00:06:16.492 17:15:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:16.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.492 --rc genhtml_branch_coverage=1 00:06:16.492 --rc genhtml_function_coverage=1 00:06:16.492 --rc genhtml_legend=1 00:06:16.492 --rc geninfo_all_blocks=1 00:06:16.493 --rc geninfo_unexecuted_blocks=1 00:06:16.493 00:06:16.493 ' 00:06:16.493 17:15:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:16.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.493 --rc genhtml_branch_coverage=1 00:06:16.493 --rc genhtml_function_coverage=1 00:06:16.493 --rc genhtml_legend=1 00:06:16.493 --rc geninfo_all_blocks=1 00:06:16.493 --rc geninfo_unexecuted_blocks=1 00:06:16.493 00:06:16.493 ' 00:06:16.493 17:15:36 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:16.493 17:15:36 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:16.493 17:15:36 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:16.493 17:15:36 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:16.493 17:15:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.493 17:15:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.493 17:15:36 -- common/autotest_common.sh@10 -- # set +x 00:06:16.493 ************************************ 00:06:16.493 START TEST default_locks 00:06:16.493 ************************************ 00:06:16.493 17:15:36 -- common/autotest_common.sh@1114 -- # default_locks 00:06:16.493 17:15:36 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2531488 00:06:16.493 17:15:36 -- event/cpu_locks.sh@47 -- # waitforlisten 2531488 00:06:16.493 17:15:36 -- event/cpu_locks.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.493 17:15:36 -- common/autotest_common.sh@829 -- # '[' -z 2531488 ']' 00:06:16.493 17:15:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.493 17:15:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.493 17:15:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.493 17:15:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.493 17:15:36 -- common/autotest_common.sh@10 -- # set +x 00:06:16.493 [2024-11-09 17:15:36.132257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.493 [2024-11-09 17:15:36.132310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531488 ] 00:06:16.493 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.493 [2024-11-09 17:15:36.199241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.752 [2024-11-09 17:15:36.271500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.752 [2024-11-09 17:15:36.271625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.320 17:15:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.320 17:15:36 -- common/autotest_common.sh@862 -- # return 0 00:06:17.320 17:15:36 -- event/cpu_locks.sh@49 -- # locks_exist 2531488 00:06:17.320 17:15:36 -- event/cpu_locks.sh@22 -- # lslocks -p 2531488 00:06:17.320 17:15:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.257 lslocks: write error 00:06:18.257 17:15:37 -- event/cpu_locks.sh@50 -- # killprocess 2531488 00:06:18.257 17:15:37 -- common/autotest_common.sh@936 -- # '[' -z 2531488 ']' 00:06:18.257 17:15:37 -- common/autotest_common.sh@940 -- # kill -0 2531488 00:06:18.257 17:15:37 -- common/autotest_common.sh@941 -- # uname 00:06:18.257 17:15:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.257 17:15:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2531488 00:06:18.257 17:15:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.257 17:15:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.257 17:15:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2531488' 00:06:18.257 killing process with pid 2531488 00:06:18.257 17:15:37 -- common/autotest_common.sh@955 -- # kill 2531488 00:06:18.257 17:15:37 -- common/autotest_common.sh@960 -- # wait 2531488 00:06:18.517 17:15:38 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2531488 00:06:18.517 17:15:38 -- common/autotest_common.sh@650 -- # local es=0 00:06:18.517 17:15:38 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2531488 00:06:18.517 17:15:38 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:18.517 17:15:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.517 17:15:38 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:18.517 17:15:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.517 17:15:38 -- common/autotest_common.sh@653 -- # waitforlisten 2531488 00:06:18.517 17:15:38 -- 
common/autotest_common.sh@829 -- # '[' -z 2531488 ']' 00:06:18.517 17:15:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.517 17:15:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.517 17:15:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.517 17:15:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.517 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:18.517 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2531488) - No such process 00:06:18.517 ERROR: process (pid: 2531488) is no longer running 00:06:18.517 17:15:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.517 17:15:38 -- common/autotest_common.sh@862 -- # return 1 00:06:18.517 17:15:38 -- common/autotest_common.sh@653 -- # es=1 00:06:18.517 17:15:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.517 17:15:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.517 17:15:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.517 17:15:38 -- event/cpu_locks.sh@54 -- # no_locks 00:06:18.517 17:15:38 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:18.517 17:15:38 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:18.517 17:15:38 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:18.517 00:06:18.517 real 0m2.006s 00:06:18.517 user 0m2.119s 00:06:18.517 sys 0m0.726s 00:06:18.517 17:15:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.517 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:18.517 ************************************ 00:06:18.517 END TEST default_locks 00:06:18.517 ************************************ 00:06:18.518 17:15:38 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:18.518 17:15:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:18.518 17:15:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.518 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:18.518 ************************************ 00:06:18.518 START TEST default_locks_via_rpc 00:06:18.518 ************************************ 00:06:18.518 17:15:38 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:18.518 17:15:38 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2531916 00:06:18.518 17:15:38 -- event/cpu_locks.sh@63 -- # waitforlisten 2531916 00:06:18.518 17:15:38 -- common/autotest_common.sh@829 -- # '[' -z 2531916 ']' 00:06:18.518 17:15:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.518 17:15:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.518 17:15:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.518 17:15:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.518 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:18.518 17:15:38 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.518 [2024-11-09 17:15:38.183022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
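A side note on the lock check that the default_locks test above (and the later cpu_locks tests) relies on: locks_exist lists the file locks held by the target pid with lslocks and greps for the spdk_cpu_lock prefix. A minimal stand-alone sketch of that check, with the pid and prefix taken from the run above (the lock files themselves normally live under /var/tmp/spdk_cpu_lock_NNN; treat the exact path as an assumption of this sketch):

pid=2531488                                   # spdk_tgt pid from the run above
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid holds at least one per-core lock"
else
    echo "pid $pid holds no spdk_cpu_lock entries"
fi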
00:06:18.518 [2024-11-09 17:15:38.183073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531916 ] 00:06:18.518 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.518 [2024-11-09 17:15:38.250204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.777 [2024-11-09 17:15:38.323537] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.777 [2024-11-09 17:15:38.323674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.348 17:15:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.348 17:15:38 -- common/autotest_common.sh@862 -- # return 0 00:06:19.348 17:15:38 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:19.348 17:15:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.348 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:19.348 17:15:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.348 17:15:38 -- event/cpu_locks.sh@67 -- # no_locks 00:06:19.348 17:15:38 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.348 17:15:38 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.348 17:15:38 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.348 17:15:38 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.348 17:15:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.348 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:19.348 17:15:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.348 17:15:38 -- event/cpu_locks.sh@71 -- # locks_exist 2531916 00:06:19.348 17:15:38 -- event/cpu_locks.sh@22 -- # lslocks -p 2531916 00:06:19.348 17:15:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.917 17:15:39 -- event/cpu_locks.sh@73 -- # killprocess 2531916 00:06:19.917 17:15:39 -- common/autotest_common.sh@936 -- # '[' -z 2531916 ']' 00:06:19.917 17:15:39 -- common/autotest_common.sh@940 -- # kill -0 2531916 00:06:19.917 17:15:39 -- common/autotest_common.sh@941 -- # uname 00:06:19.917 17:15:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:19.917 17:15:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2531916 00:06:19.917 17:15:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:19.917 17:15:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:19.917 17:15:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2531916' 00:06:19.917 killing process with pid 2531916 00:06:19.917 17:15:39 -- common/autotest_common.sh@955 -- # kill 2531916 00:06:19.917 17:15:39 -- common/autotest_common.sh@960 -- # wait 2531916 00:06:20.177 00:06:20.177 real 0m1.781s 00:06:20.177 user 0m1.877s 00:06:20.177 sys 0m0.615s 00:06:20.177 17:15:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.177 17:15:39 -- common/autotest_common.sh@10 -- # set +x 00:06:20.177 ************************************ 00:06:20.177 END TEST default_locks_via_rpc 00:06:20.177 ************************************ 00:06:20.436 17:15:39 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:20.436 17:15:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:20.436 17:15:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.436 17:15:39 -- 
common/autotest_common.sh@10 -- # set +x 00:06:20.436 ************************************ 00:06:20.436 START TEST non_locking_app_on_locked_coremask 00:06:20.436 ************************************ 00:06:20.436 17:15:39 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:20.436 17:15:39 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2532352 00:06:20.436 17:15:39 -- event/cpu_locks.sh@81 -- # waitforlisten 2532352 /var/tmp/spdk.sock 00:06:20.436 17:15:39 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.436 17:15:39 -- common/autotest_common.sh@829 -- # '[' -z 2532352 ']' 00:06:20.436 17:15:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.436 17:15:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.436 17:15:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.436 17:15:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.436 17:15:39 -- common/autotest_common.sh@10 -- # set +x 00:06:20.436 [2024-11-09 17:15:40.016074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.436 [2024-11-09 17:15:40.016126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2532352 ] 00:06:20.436 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.436 [2024-11-09 17:15:40.084783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.436 [2024-11-09 17:15:40.159355] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.436 [2024-11-09 17:15:40.159473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.375 17:15:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.375 17:15:40 -- common/autotest_common.sh@862 -- # return 0 00:06:21.375 17:15:40 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:21.375 17:15:40 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2532368 00:06:21.375 17:15:40 -- event/cpu_locks.sh@85 -- # waitforlisten 2532368 /var/tmp/spdk2.sock 00:06:21.375 17:15:40 -- common/autotest_common.sh@829 -- # '[' -z 2532368 ']' 00:06:21.375 17:15:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.375 17:15:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.375 17:15:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.375 17:15:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.375 17:15:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.375 [2024-11-09 17:15:40.851959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:21.375 [2024-11-09 17:15:40.852007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2532368 ] 00:06:21.375 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.375 [2024-11-09 17:15:40.943510] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:21.375 [2024-11-09 17:15:40.943536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.375 [2024-11-09 17:15:41.087036] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.375 [2024-11-09 17:15:41.087150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.943 17:15:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.943 17:15:41 -- common/autotest_common.sh@862 -- # return 0 00:06:21.943 17:15:41 -- event/cpu_locks.sh@87 -- # locks_exist 2532352 00:06:21.943 17:15:41 -- event/cpu_locks.sh@22 -- # lslocks -p 2532352 00:06:21.943 17:15:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.322 lslocks: write error 00:06:23.322 17:15:42 -- event/cpu_locks.sh@89 -- # killprocess 2532352 00:06:23.322 17:15:42 -- common/autotest_common.sh@936 -- # '[' -z 2532352 ']' 00:06:23.322 17:15:42 -- common/autotest_common.sh@940 -- # kill -0 2532352 00:06:23.322 17:15:42 -- common/autotest_common.sh@941 -- # uname 00:06:23.322 17:15:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:23.322 17:15:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2532352 00:06:23.322 17:15:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:23.322 17:15:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:23.322 17:15:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2532352' 00:06:23.322 killing process with pid 2532352 00:06:23.322 17:15:42 -- common/autotest_common.sh@955 -- # kill 2532352 00:06:23.322 17:15:42 -- common/autotest_common.sh@960 -- # wait 2532352 00:06:23.890 17:15:43 -- event/cpu_locks.sh@90 -- # killprocess 2532368 00:06:23.890 17:15:43 -- common/autotest_common.sh@936 -- # '[' -z 2532368 ']' 00:06:23.890 17:15:43 -- common/autotest_common.sh@940 -- # kill -0 2532368 00:06:23.890 17:15:43 -- common/autotest_common.sh@941 -- # uname 00:06:23.890 17:15:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:23.890 17:15:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2532368 00:06:23.890 17:15:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:23.890 17:15:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:23.890 17:15:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2532368' 00:06:23.890 killing process with pid 2532368 00:06:23.890 17:15:43 -- common/autotest_common.sh@955 -- # kill 2532368 00:06:23.890 17:15:43 -- common/autotest_common.sh@960 -- # wait 2532368 00:06:24.149 00:06:24.149 real 0m3.851s 00:06:24.149 user 0m4.135s 00:06:24.149 sys 0m1.265s 00:06:24.149 17:15:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.149 17:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:24.149 ************************************ 00:06:24.149 END TEST non_locking_app_on_locked_coremask 00:06:24.149 ************************************ 00:06:24.149 17:15:43 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:24.149 17:15:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.149 17:15:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.149 17:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:24.149 ************************************ 00:06:24.149 START TEST locking_app_on_unlocked_coremask 00:06:24.149 ************************************ 00:06:24.149 17:15:43 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:24.149 17:15:43 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2532941 00:06:24.149 17:15:43 -- event/cpu_locks.sh@99 -- # waitforlisten 2532941 /var/tmp/spdk.sock 00:06:24.149 17:15:43 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:24.149 17:15:43 -- common/autotest_common.sh@829 -- # '[' -z 2532941 ']' 00:06:24.149 17:15:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.149 17:15:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.149 17:15:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.149 17:15:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.149 17:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:24.149 [2024-11-09 17:15:43.914715] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.149 [2024-11-09 17:15:43.914762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2532941 ] 00:06:24.409 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.409 [2024-11-09 17:15:43.981920] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:24.409 [2024-11-09 17:15:43.981944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.409 [2024-11-09 17:15:44.054433] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.409 [2024-11-09 17:15:44.054551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.978 17:15:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.978 17:15:44 -- common/autotest_common.sh@862 -- # return 0 00:06:24.978 17:15:44 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.978 17:15:44 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2533200 00:06:24.978 17:15:44 -- event/cpu_locks.sh@103 -- # waitforlisten 2533200 /var/tmp/spdk2.sock 00:06:24.978 17:15:44 -- common/autotest_common.sh@829 -- # '[' -z 2533200 ']' 00:06:24.978 17:15:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.978 17:15:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.978 17:15:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
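For orientation: locking_app_on_unlocked_coremask above starts the first spdk_tgt with --disable-cpumask-locks so that a second target can come up on the same core mask (0x1) with its own RPC socket, while the earlier default_locks_via_rpc test flips the same behaviour at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs. A rough sketch of that sequence from an SPDK checkout (relative paths are assumptions; the log uses the full Jenkins workspace paths):

# first instance: core 0, per-core lock files not taken at startup
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
# second instance: same core mask, separate RPC socket, so no lock conflict
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
# core locks can also be toggled at runtime over RPC (default socket shown)
./scripts/rpc.py framework_enable_cpumask_locks
./scripts/rpc.py framework_disable_cpumask_locks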
00:06:24.978 17:15:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.978 17:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:25.237 [2024-11-09 17:15:44.751168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.237 [2024-11-09 17:15:44.751216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533200 ] 00:06:25.237 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.237 [2024-11-09 17:15:44.847119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.237 [2024-11-09 17:15:44.988663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.237 [2024-11-09 17:15:44.988779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.174 17:15:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.174 17:15:45 -- common/autotest_common.sh@862 -- # return 0 00:06:26.174 17:15:45 -- event/cpu_locks.sh@105 -- # locks_exist 2533200 00:06:26.174 17:15:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.174 17:15:45 -- event/cpu_locks.sh@22 -- # lslocks -p 2533200 00:06:26.433 lslocks: write error 00:06:26.433 17:15:46 -- event/cpu_locks.sh@107 -- # killprocess 2532941 00:06:26.433 17:15:46 -- common/autotest_common.sh@936 -- # '[' -z 2532941 ']' 00:06:26.433 17:15:46 -- common/autotest_common.sh@940 -- # kill -0 2532941 00:06:26.433 17:15:46 -- common/autotest_common.sh@941 -- # uname 00:06:26.433 17:15:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:26.433 17:15:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2532941 00:06:26.433 17:15:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:26.433 17:15:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:26.433 17:15:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2532941' 00:06:26.433 killing process with pid 2532941 00:06:26.433 17:15:46 -- common/autotest_common.sh@955 -- # kill 2532941 00:06:26.433 17:15:46 -- common/autotest_common.sh@960 -- # wait 2532941 00:06:27.371 17:15:46 -- event/cpu_locks.sh@108 -- # killprocess 2533200 00:06:27.371 17:15:46 -- common/autotest_common.sh@936 -- # '[' -z 2533200 ']' 00:06:27.371 17:15:46 -- common/autotest_common.sh@940 -- # kill -0 2533200 00:06:27.371 17:15:46 -- common/autotest_common.sh@941 -- # uname 00:06:27.371 17:15:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:27.371 17:15:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2533200 00:06:27.371 17:15:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:27.371 17:15:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:27.371 17:15:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2533200' 00:06:27.371 killing process with pid 2533200 00:06:27.371 17:15:46 -- common/autotest_common.sh@955 -- # kill 2533200 00:06:27.371 17:15:46 -- common/autotest_common.sh@960 -- # wait 2533200 00:06:27.630 00:06:27.630 real 0m3.325s 00:06:27.630 user 0m3.581s 00:06:27.630 sys 0m1.002s 00:06:27.630 17:15:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.630 17:15:47 -- common/autotest_common.sh@10 -- # set +x 00:06:27.630 ************************************ 00:06:27.630 END TEST locking_app_on_unlocked_coremask 
00:06:27.630 ************************************ 00:06:27.630 17:15:47 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:27.630 17:15:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.630 17:15:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.630 17:15:47 -- common/autotest_common.sh@10 -- # set +x 00:06:27.630 ************************************ 00:06:27.630 START TEST locking_app_on_locked_coremask 00:06:27.631 ************************************ 00:06:27.631 17:15:47 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:27.631 17:15:47 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2533549 00:06:27.631 17:15:47 -- event/cpu_locks.sh@116 -- # waitforlisten 2533549 /var/tmp/spdk.sock 00:06:27.631 17:15:47 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.631 17:15:47 -- common/autotest_common.sh@829 -- # '[' -z 2533549 ']' 00:06:27.631 17:15:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.631 17:15:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.631 17:15:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.631 17:15:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.631 17:15:47 -- common/autotest_common.sh@10 -- # set +x 00:06:27.631 [2024-11-09 17:15:47.291577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.631 [2024-11-09 17:15:47.291630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533549 ] 00:06:27.631 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.631 [2024-11-09 17:15:47.360174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.890 [2024-11-09 17:15:47.426503] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.890 [2024-11-09 17:15:47.426618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.458 17:15:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.458 17:15:48 -- common/autotest_common.sh@862 -- # return 0 00:06:28.458 17:15:48 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:28.458 17:15:48 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2533790 00:06:28.458 17:15:48 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2533790 /var/tmp/spdk2.sock 00:06:28.458 17:15:48 -- common/autotest_common.sh@650 -- # local es=0 00:06:28.458 17:15:48 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2533790 /var/tmp/spdk2.sock 00:06:28.458 17:15:48 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:28.458 17:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.458 17:15:48 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:28.458 17:15:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.458 17:15:48 -- common/autotest_common.sh@653 -- # waitforlisten 2533790 /var/tmp/spdk2.sock 00:06:28.458 17:15:48 -- common/autotest_common.sh@829 -- # '[' 
-z 2533790 ']' 00:06:28.458 17:15:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.458 17:15:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.458 17:15:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.458 17:15:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.458 17:15:48 -- common/autotest_common.sh@10 -- # set +x 00:06:28.458 [2024-11-09 17:15:48.140884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.458 [2024-11-09 17:15:48.140933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533790 ] 00:06:28.458 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.718 [2024-11-09 17:15:48.233060] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2533549 has claimed it. 00:06:28.718 [2024-11-09 17:15:48.233101] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.285 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2533790) - No such process 00:06:29.285 ERROR: process (pid: 2533790) is no longer running 00:06:29.285 17:15:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.285 17:15:48 -- common/autotest_common.sh@862 -- # return 1 00:06:29.285 17:15:48 -- common/autotest_common.sh@653 -- # es=1 00:06:29.285 17:15:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.285 17:15:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.285 17:15:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.285 17:15:48 -- event/cpu_locks.sh@122 -- # locks_exist 2533549 00:06:29.285 17:15:48 -- event/cpu_locks.sh@22 -- # lslocks -p 2533549 00:06:29.285 17:15:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.855 lslocks: write error 00:06:29.855 17:15:49 -- event/cpu_locks.sh@124 -- # killprocess 2533549 00:06:29.855 17:15:49 -- common/autotest_common.sh@936 -- # '[' -z 2533549 ']' 00:06:29.855 17:15:49 -- common/autotest_common.sh@940 -- # kill -0 2533549 00:06:29.855 17:15:49 -- common/autotest_common.sh@941 -- # uname 00:06:29.855 17:15:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.855 17:15:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2533549 00:06:29.855 17:15:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.855 17:15:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.855 17:15:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2533549' 00:06:29.855 killing process with pid 2533549 00:06:29.855 17:15:49 -- common/autotest_common.sh@955 -- # kill 2533549 00:06:29.855 17:15:49 -- common/autotest_common.sh@960 -- # wait 2533549 00:06:30.166 00:06:30.166 real 0m2.511s 00:06:30.166 user 0m2.742s 00:06:30.166 sys 0m0.762s 00:06:30.166 17:15:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.166 17:15:49 -- common/autotest_common.sh@10 -- # set +x 00:06:30.166 ************************************ 00:06:30.166 END TEST locking_app_on_locked_coremask 00:06:30.166 ************************************ 00:06:30.166 17:15:49 -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:30.166 17:15:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.166 17:15:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.166 17:15:49 -- common/autotest_common.sh@10 -- # set +x 00:06:30.166 ************************************ 00:06:30.166 START TEST locking_overlapped_coremask 00:06:30.166 ************************************ 00:06:30.166 17:15:49 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:30.166 17:15:49 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2534082 00:06:30.166 17:15:49 -- event/cpu_locks.sh@133 -- # waitforlisten 2534082 /var/tmp/spdk.sock 00:06:30.166 17:15:49 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:30.166 17:15:49 -- common/autotest_common.sh@829 -- # '[' -z 2534082 ']' 00:06:30.166 17:15:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.166 17:15:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.166 17:15:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.166 17:15:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.166 17:15:49 -- common/autotest_common.sh@10 -- # set +x 00:06:30.166 [2024-11-09 17:15:49.853421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.166 [2024-11-09 17:15:49.853499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2534082 ] 00:06:30.166 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.166 [2024-11-09 17:15:49.923722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.424 [2024-11-09 17:15:49.996385] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.424 [2024-11-09 17:15:49.996550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.424 [2024-11-09 17:15:49.996653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.424 [2024-11-09 17:15:49.996655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.992 17:15:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.992 17:15:50 -- common/autotest_common.sh@862 -- # return 0 00:06:30.992 17:15:50 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2534315 00:06:30.992 17:15:50 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2534315 /var/tmp/spdk2.sock 00:06:30.992 17:15:50 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:30.992 17:15:50 -- common/autotest_common.sh@650 -- # local es=0 00:06:30.992 17:15:50 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2534315 /var/tmp/spdk2.sock 00:06:30.992 17:15:50 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:30.992 17:15:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.992 17:15:50 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:30.992 17:15:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.992 17:15:50 -- 
common/autotest_common.sh@653 -- # waitforlisten 2534315 /var/tmp/spdk2.sock 00:06:30.992 17:15:50 -- common/autotest_common.sh@829 -- # '[' -z 2534315 ']' 00:06:30.992 17:15:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.992 17:15:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.992 17:15:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.992 17:15:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.992 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:06:30.992 [2024-11-09 17:15:50.709660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.992 [2024-11-09 17:15:50.709710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2534315 ] 00:06:30.992 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.251 [2024-11-09 17:15:50.809389] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2534082 has claimed it. 00:06:31.251 [2024-11-09 17:15:50.809433] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:31.819 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2534315) - No such process 00:06:31.819 ERROR: process (pid: 2534315) is no longer running 00:06:31.819 17:15:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.819 17:15:51 -- common/autotest_common.sh@862 -- # return 1 00:06:31.819 17:15:51 -- common/autotest_common.sh@653 -- # es=1 00:06:31.819 17:15:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:31.819 17:15:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:31.819 17:15:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:31.819 17:15:51 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:31.819 17:15:51 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.819 17:15:51 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.819 17:15:51 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.819 17:15:51 -- event/cpu_locks.sh@141 -- # killprocess 2534082 00:06:31.819 17:15:51 -- common/autotest_common.sh@936 -- # '[' -z 2534082 ']' 00:06:31.819 17:15:51 -- common/autotest_common.sh@940 -- # kill -0 2534082 00:06:31.819 17:15:51 -- common/autotest_common.sh@941 -- # uname 00:06:31.819 17:15:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:31.819 17:15:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2534082 00:06:31.819 17:15:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:31.819 17:15:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:31.819 17:15:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2534082' 00:06:31.819 killing process with pid 2534082 00:06:31.819 17:15:51 -- common/autotest_common.sh@955 -- # kill 2534082 00:06:31.819 17:15:51 -- 
common/autotest_common.sh@960 -- # wait 2534082 00:06:32.078 00:06:32.078 real 0m1.938s 00:06:32.078 user 0m5.437s 00:06:32.078 sys 0m0.443s 00:06:32.078 17:15:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.078 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:06:32.078 ************************************ 00:06:32.079 END TEST locking_overlapped_coremask 00:06:32.079 ************************************ 00:06:32.079 17:15:51 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:32.079 17:15:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.079 17:15:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.079 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:06:32.079 ************************************ 00:06:32.079 START TEST locking_overlapped_coremask_via_rpc 00:06:32.079 ************************************ 00:06:32.079 17:15:51 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:32.079 17:15:51 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2534413 00:06:32.079 17:15:51 -- event/cpu_locks.sh@149 -- # waitforlisten 2534413 /var/tmp/spdk.sock 00:06:32.079 17:15:51 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:32.079 17:15:51 -- common/autotest_common.sh@829 -- # '[' -z 2534413 ']' 00:06:32.079 17:15:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.079 17:15:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.079 17:15:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.079 17:15:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.079 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:06:32.079 [2024-11-09 17:15:51.840859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.079 [2024-11-09 17:15:51.840907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2534413 ] 00:06:32.338 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.338 [2024-11-09 17:15:51.909262] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:32.338 [2024-11-09 17:15:51.909287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.338 [2024-11-09 17:15:51.983240] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.338 [2024-11-09 17:15:51.983383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.338 [2024-11-09 17:15:51.983485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.338 [2024-11-09 17:15:51.983488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.957 17:15:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.957 17:15:52 -- common/autotest_common.sh@862 -- # return 0 00:06:32.957 17:15:52 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2534665 00:06:32.957 17:15:52 -- event/cpu_locks.sh@153 -- # waitforlisten 2534665 /var/tmp/spdk2.sock 00:06:32.957 17:15:52 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:32.957 17:15:52 -- common/autotest_common.sh@829 -- # '[' -z 2534665 ']' 00:06:32.957 17:15:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.957 17:15:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.957 17:15:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.957 17:15:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.957 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:06:32.957 [2024-11-09 17:15:52.691087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.957 [2024-11-09 17:15:52.691136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2534665 ] 00:06:32.957 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.236 [2024-11-09 17:15:52.790911] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.236 [2024-11-09 17:15:52.790938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.237 [2024-11-09 17:15:52.928466] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.237 [2024-11-09 17:15:52.928621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.237 [2024-11-09 17:15:52.928736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.237 [2024-11-09 17:15:52.928738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:33.805 17:15:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.805 17:15:53 -- common/autotest_common.sh@862 -- # return 0 00:06:33.805 17:15:53 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:33.805 17:15:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.805 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:06:33.805 17:15:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.805 17:15:53 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.805 17:15:53 -- common/autotest_common.sh@650 -- # local es=0 00:06:33.805 17:15:53 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.805 17:15:53 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:33.805 17:15:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.805 17:15:53 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:33.805 17:15:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.805 17:15:53 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.805 17:15:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.805 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:06:33.805 [2024-11-09 17:15:53.520522] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2534413 has claimed it. 00:06:33.805 request: 00:06:33.805 { 00:06:33.805 "method": "framework_enable_cpumask_locks", 00:06:33.805 "req_id": 1 00:06:33.805 } 00:06:33.805 Got JSON-RPC error response 00:06:33.805 response: 00:06:33.805 { 00:06:33.805 "code": -32603, 00:06:33.805 "message": "Failed to claim CPU core: 2" 00:06:33.805 } 00:06:33.805 17:15:53 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:33.805 17:15:53 -- common/autotest_common.sh@653 -- # es=1 00:06:33.805 17:15:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:33.805 17:15:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:33.805 17:15:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:33.805 17:15:53 -- event/cpu_locks.sh@158 -- # waitforlisten 2534413 /var/tmp/spdk.sock 00:06:33.805 17:15:53 -- common/autotest_common.sh@829 -- # '[' -z 2534413 ']' 00:06:33.805 17:15:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.805 17:15:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.805 17:15:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
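The claim failure reported just above follows directly from the reactor masks involved: the first target runs with -m 0x7 (cores 0-2) and takes its per-core locks once framework_enable_cpumask_locks succeeds, while the second runs with -m 0x1c (cores 2-4), so the masks overlap on core 2 and the same RPC on the second instance fails with "Failed to claim CPU core: 2". A one-liner to see the overlap (masks copied from the log):

mask1=0x7    # first spdk_tgt: cores 0,1,2
mask2=0x1c   # second spdk_tgt: cores 2,3,4
printf 'overlapping cores mask: 0x%x\n' $(( mask1 & mask2 ))   # 0x4, i.e. core 2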
00:06:33.805 17:15:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.805 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:06:34.064 17:15:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.064 17:15:53 -- common/autotest_common.sh@862 -- # return 0 00:06:34.064 17:15:53 -- event/cpu_locks.sh@159 -- # waitforlisten 2534665 /var/tmp/spdk2.sock 00:06:34.064 17:15:53 -- common/autotest_common.sh@829 -- # '[' -z 2534665 ']' 00:06:34.064 17:15:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.064 17:15:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.064 17:15:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.064 17:15:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.064 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:06:34.324 17:15:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.324 17:15:53 -- common/autotest_common.sh@862 -- # return 0 00:06:34.324 17:15:53 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:34.324 17:15:53 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.324 17:15:53 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.324 17:15:53 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.324 00:06:34.324 real 0m2.142s 00:06:34.324 user 0m0.879s 00:06:34.324 sys 0m0.190s 00:06:34.324 17:15:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.324 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:06:34.324 ************************************ 00:06:34.324 END TEST locking_overlapped_coremask_via_rpc 00:06:34.324 ************************************ 00:06:34.324 17:15:53 -- event/cpu_locks.sh@174 -- # cleanup 00:06:34.324 17:15:53 -- event/cpu_locks.sh@15 -- # [[ -z 2534413 ]] 00:06:34.324 17:15:53 -- event/cpu_locks.sh@15 -- # killprocess 2534413 00:06:34.324 17:15:53 -- common/autotest_common.sh@936 -- # '[' -z 2534413 ']' 00:06:34.324 17:15:53 -- common/autotest_common.sh@940 -- # kill -0 2534413 00:06:34.324 17:15:53 -- common/autotest_common.sh@941 -- # uname 00:06:34.324 17:15:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:34.324 17:15:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2534413 00:06:34.324 17:15:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:34.324 17:15:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:34.324 17:15:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2534413' 00:06:34.324 killing process with pid 2534413 00:06:34.324 17:15:54 -- common/autotest_common.sh@955 -- # kill 2534413 00:06:34.324 17:15:54 -- common/autotest_common.sh@960 -- # wait 2534413 00:06:34.893 17:15:54 -- event/cpu_locks.sh@16 -- # [[ -z 2534665 ]] 00:06:34.893 17:15:54 -- event/cpu_locks.sh@16 -- # killprocess 2534665 00:06:34.893 17:15:54 -- common/autotest_common.sh@936 -- # '[' -z 2534665 ']' 00:06:34.893 17:15:54 -- common/autotest_common.sh@940 -- # kill -0 2534665 00:06:34.893 17:15:54 -- common/autotest_common.sh@941 -- # uname 
00:06:34.893 17:15:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:34.893 17:15:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2534665 00:06:34.893 17:15:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:34.893 17:15:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:34.893 17:15:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2534665' 00:06:34.893 killing process with pid 2534665 00:06:34.893 17:15:54 -- common/autotest_common.sh@955 -- # kill 2534665 00:06:34.893 17:15:54 -- common/autotest_common.sh@960 -- # wait 2534665 00:06:35.152 17:15:54 -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.152 17:15:54 -- event/cpu_locks.sh@1 -- # cleanup 00:06:35.152 17:15:54 -- event/cpu_locks.sh@15 -- # [[ -z 2534413 ]] 00:06:35.152 17:15:54 -- event/cpu_locks.sh@15 -- # killprocess 2534413 00:06:35.152 17:15:54 -- common/autotest_common.sh@936 -- # '[' -z 2534413 ']' 00:06:35.152 17:15:54 -- common/autotest_common.sh@940 -- # kill -0 2534413 00:06:35.152 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2534413) - No such process 00:06:35.152 17:15:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2534413 is not found' 00:06:35.152 Process with pid 2534413 is not found 00:06:35.152 17:15:54 -- event/cpu_locks.sh@16 -- # [[ -z 2534665 ]] 00:06:35.152 17:15:54 -- event/cpu_locks.sh@16 -- # killprocess 2534665 00:06:35.152 17:15:54 -- common/autotest_common.sh@936 -- # '[' -z 2534665 ']' 00:06:35.152 17:15:54 -- common/autotest_common.sh@940 -- # kill -0 2534665 00:06:35.152 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2534665) - No such process 00:06:35.152 17:15:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2534665 is not found' 00:06:35.152 Process with pid 2534665 is not found 00:06:35.152 17:15:54 -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.152 00:06:35.152 real 0m18.913s 00:06:35.152 user 0m31.755s 00:06:35.152 sys 0m6.006s 00:06:35.152 17:15:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.152 17:15:54 -- common/autotest_common.sh@10 -- # set +x 00:06:35.152 ************************************ 00:06:35.152 END TEST cpu_locks 00:06:35.152 ************************************ 00:06:35.152 00:06:35.152 real 0m44.825s 00:06:35.152 user 1m24.283s 00:06:35.152 sys 0m9.987s 00:06:35.152 17:15:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.152 17:15:54 -- common/autotest_common.sh@10 -- # set +x 00:06:35.152 ************************************ 00:06:35.152 END TEST event 00:06:35.152 ************************************ 00:06:35.152 17:15:54 -- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:35.152 17:15:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.152 17:15:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.152 17:15:54 -- common/autotest_common.sh@10 -- # set +x 00:06:35.152 ************************************ 00:06:35.152 START TEST thread 00:06:35.152 ************************************ 00:06:35.152 17:15:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:35.412 * Looking for test storage... 
00:06:35.412 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:35.412 17:15:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:35.412 17:15:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:35.412 17:15:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:35.412 17:15:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:35.412 17:15:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:35.412 17:15:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:35.412 17:15:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:35.412 17:15:55 -- scripts/common.sh@335 -- # IFS=.-: 00:06:35.412 17:15:55 -- scripts/common.sh@335 -- # read -ra ver1 00:06:35.412 17:15:55 -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.412 17:15:55 -- scripts/common.sh@336 -- # read -ra ver2 00:06:35.412 17:15:55 -- scripts/common.sh@337 -- # local 'op=<' 00:06:35.412 17:15:55 -- scripts/common.sh@339 -- # ver1_l=2 00:06:35.412 17:15:55 -- scripts/common.sh@340 -- # ver2_l=1 00:06:35.412 17:15:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:35.412 17:15:55 -- scripts/common.sh@343 -- # case "$op" in 00:06:35.412 17:15:55 -- scripts/common.sh@344 -- # : 1 00:06:35.412 17:15:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:35.412 17:15:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.412 17:15:55 -- scripts/common.sh@364 -- # decimal 1 00:06:35.412 17:15:55 -- scripts/common.sh@352 -- # local d=1 00:06:35.412 17:15:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.412 17:15:55 -- scripts/common.sh@354 -- # echo 1 00:06:35.412 17:15:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:35.412 17:15:55 -- scripts/common.sh@365 -- # decimal 2 00:06:35.412 17:15:55 -- scripts/common.sh@352 -- # local d=2 00:06:35.412 17:15:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.412 17:15:55 -- scripts/common.sh@354 -- # echo 2 00:06:35.412 17:15:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:35.412 17:15:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:35.412 17:15:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:35.412 17:15:55 -- scripts/common.sh@367 -- # return 0 00:06:35.412 17:15:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.412 17:15:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:35.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.412 --rc genhtml_branch_coverage=1 00:06:35.412 --rc genhtml_function_coverage=1 00:06:35.412 --rc genhtml_legend=1 00:06:35.412 --rc geninfo_all_blocks=1 00:06:35.412 --rc geninfo_unexecuted_blocks=1 00:06:35.412 00:06:35.412 ' 00:06:35.412 17:15:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:35.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.412 --rc genhtml_branch_coverage=1 00:06:35.412 --rc genhtml_function_coverage=1 00:06:35.412 --rc genhtml_legend=1 00:06:35.412 --rc geninfo_all_blocks=1 00:06:35.412 --rc geninfo_unexecuted_blocks=1 00:06:35.412 00:06:35.412 ' 00:06:35.412 17:15:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:35.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.412 --rc genhtml_branch_coverage=1 00:06:35.412 --rc genhtml_function_coverage=1 00:06:35.412 --rc genhtml_legend=1 00:06:35.412 --rc geninfo_all_blocks=1 00:06:35.412 --rc geninfo_unexecuted_blocks=1 00:06:35.412 00:06:35.412 ' 
00:06:35.412 17:15:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:35.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.412 --rc genhtml_branch_coverage=1 00:06:35.412 --rc genhtml_function_coverage=1 00:06:35.412 --rc genhtml_legend=1 00:06:35.412 --rc geninfo_all_blocks=1 00:06:35.412 --rc geninfo_unexecuted_blocks=1 00:06:35.412 00:06:35.412 ' 00:06:35.412 17:15:55 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.412 17:15:55 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:35.412 17:15:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.412 17:15:55 -- common/autotest_common.sh@10 -- # set +x 00:06:35.412 ************************************ 00:06:35.412 START TEST thread_poller_perf 00:06:35.412 ************************************ 00:06:35.412 17:15:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.412 [2024-11-09 17:15:55.095858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.412 [2024-11-09 17:15:55.095922] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2535141 ] 00:06:35.412 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.412 [2024-11-09 17:15:55.165368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.672 [2024-11-09 17:15:55.232870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.672 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:36.609 [2024-11-09T16:15:56.379Z] ====================================== 00:06:36.609 [2024-11-09T16:15:56.379Z] busy:2511071456 (cyc) 00:06:36.609 [2024-11-09T16:15:56.379Z] total_run_count: 416000 00:06:36.609 [2024-11-09T16:15:56.379Z] tsc_hz: 2500000000 (cyc) 00:06:36.609 [2024-11-09T16:15:56.379Z] ====================================== 00:06:36.609 [2024-11-09T16:15:56.379Z] poller_cost: 6036 (cyc), 2414 (nsec) 00:06:36.609 00:06:36.609 real 0m1.249s 00:06:36.609 user 0m1.161s 00:06:36.609 sys 0m0.084s 00:06:36.609 17:15:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.609 17:15:56 -- common/autotest_common.sh@10 -- # set +x 00:06:36.609 ************************************ 00:06:36.609 END TEST thread_poller_perf 00:06:36.609 ************************************ 00:06:36.609 17:15:56 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.609 17:15:56 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:36.609 17:15:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.609 17:15:56 -- common/autotest_common.sh@10 -- # set +x 00:06:36.609 ************************************ 00:06:36.609 START TEST thread_poller_perf 00:06:36.609 ************************************ 00:06:36.609 17:15:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.868 [2024-11-09 17:15:56.390153] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:36.868 [2024-11-09 17:15:56.390244] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2535340 ] 00:06:36.868 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.868 [2024-11-09 17:15:56.461273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.868 [2024-11-09 17:15:56.527317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.868 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:38.247 [2024-11-09T16:15:58.017Z] ====================================== 00:06:38.247 [2024-11-09T16:15:58.017Z] busy:2502264166 (cyc) 00:06:38.247 [2024-11-09T16:15:58.017Z] total_run_count: 5577000 00:06:38.247 [2024-11-09T16:15:58.017Z] tsc_hz: 2500000000 (cyc) 00:06:38.247 [2024-11-09T16:15:58.017Z] ====================================== 00:06:38.247 [2024-11-09T16:15:58.017Z] poller_cost: 448 (cyc), 179 (nsec) 00:06:38.247 00:06:38.247 real 0m1.246s 00:06:38.247 user 0m1.156s 00:06:38.247 sys 0m0.086s 00:06:38.247 17:15:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.247 17:15:57 -- common/autotest_common.sh@10 -- # set +x 00:06:38.247 ************************************ 00:06:38.247 END TEST thread_poller_perf 00:06:38.247 ************************************ 00:06:38.247 17:15:57 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:38.247 00:06:38.247 real 0m2.778s 00:06:38.247 user 0m2.453s 00:06:38.247 sys 0m0.346s 00:06:38.247 17:15:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.247 17:15:57 -- common/autotest_common.sh@10 -- # set +x 00:06:38.247 ************************************ 00:06:38.247 END TEST thread 00:06:38.247 ************************************ 00:06:38.247 17:15:57 -- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:38.247 17:15:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:38.247 17:15:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.247 17:15:57 -- common/autotest_common.sh@10 -- # set +x 00:06:38.247 ************************************ 00:06:38.247 START TEST accel 00:06:38.247 ************************************ 00:06:38.247 17:15:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:38.247 * Looking for test storage... 
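[Annotation, not part of the captured log] The poller_cost figures printed by the two poller_perf runs above follow directly from the busy cycle count, the run count, and the reported TSC frequency. A quick back-of-the-envelope check using only numbers from the output (a sketch, not part of the test):

# Run 1 (-l 1): 2511071456 busy cycles / 416000 polls  -> ~6036 cyc per poll
# Run 2 (-l 0): 2502264166 busy cycles / 5577000 polls -> ~448 cyc per poll
# At tsc_hz = 2.5 GHz, one cycle is 0.4 ns, so nsec = cyc / 2.5.
awk 'BEGIN { printf "%d cyc, %d nsec\n", int(2511071456/416000), int(2511071456/416000/2.5) }'
awk 'BEGIN { printf "%d cyc, %d nsec\n", int(2502264166/5577000), int(2502264166/5577000/2.5) }'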
00:06:38.247 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:38.247 17:15:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:38.247 17:15:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:38.247 17:15:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:38.247 17:15:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:38.247 17:15:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:38.247 17:15:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:38.247 17:15:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:38.247 17:15:57 -- scripts/common.sh@335 -- # IFS=.-: 00:06:38.247 17:15:57 -- scripts/common.sh@335 -- # read -ra ver1 00:06:38.247 17:15:57 -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.247 17:15:57 -- scripts/common.sh@336 -- # read -ra ver2 00:06:38.247 17:15:57 -- scripts/common.sh@337 -- # local 'op=<' 00:06:38.247 17:15:57 -- scripts/common.sh@339 -- # ver1_l=2 00:06:38.247 17:15:57 -- scripts/common.sh@340 -- # ver2_l=1 00:06:38.247 17:15:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:38.247 17:15:57 -- scripts/common.sh@343 -- # case "$op" in 00:06:38.247 17:15:57 -- scripts/common.sh@344 -- # : 1 00:06:38.247 17:15:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:38.247 17:15:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.247 17:15:57 -- scripts/common.sh@364 -- # decimal 1 00:06:38.247 17:15:57 -- scripts/common.sh@352 -- # local d=1 00:06:38.247 17:15:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.247 17:15:57 -- scripts/common.sh@354 -- # echo 1 00:06:38.247 17:15:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:38.247 17:15:57 -- scripts/common.sh@365 -- # decimal 2 00:06:38.247 17:15:57 -- scripts/common.sh@352 -- # local d=2 00:06:38.247 17:15:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.247 17:15:57 -- scripts/common.sh@354 -- # echo 2 00:06:38.247 17:15:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:38.247 17:15:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:38.247 17:15:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:38.247 17:15:57 -- scripts/common.sh@367 -- # return 0 00:06:38.247 17:15:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.247 17:15:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:38.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.247 --rc genhtml_branch_coverage=1 00:06:38.247 --rc genhtml_function_coverage=1 00:06:38.247 --rc genhtml_legend=1 00:06:38.247 --rc geninfo_all_blocks=1 00:06:38.247 --rc geninfo_unexecuted_blocks=1 00:06:38.247 00:06:38.247 ' 00:06:38.247 17:15:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:38.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.247 --rc genhtml_branch_coverage=1 00:06:38.247 --rc genhtml_function_coverage=1 00:06:38.247 --rc genhtml_legend=1 00:06:38.247 --rc geninfo_all_blocks=1 00:06:38.247 --rc geninfo_unexecuted_blocks=1 00:06:38.247 00:06:38.247 ' 00:06:38.247 17:15:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:38.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.247 --rc genhtml_branch_coverage=1 00:06:38.247 --rc genhtml_function_coverage=1 00:06:38.247 --rc genhtml_legend=1 00:06:38.247 --rc geninfo_all_blocks=1 00:06:38.247 --rc geninfo_unexecuted_blocks=1 00:06:38.247 00:06:38.247 ' 
00:06:38.247 17:15:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:38.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.247 --rc genhtml_branch_coverage=1 00:06:38.247 --rc genhtml_function_coverage=1 00:06:38.247 --rc genhtml_legend=1 00:06:38.247 --rc geninfo_all_blocks=1 00:06:38.247 --rc geninfo_unexecuted_blocks=1 00:06:38.247 00:06:38.247 ' 00:06:38.247 17:15:57 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:38.247 17:15:57 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:38.247 17:15:57 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.247 17:15:57 -- accel/accel.sh@59 -- # spdk_tgt_pid=2535676 00:06:38.247 17:15:57 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:38.247 17:15:57 -- accel/accel.sh@60 -- # waitforlisten 2535676 00:06:38.247 17:15:57 -- common/autotest_common.sh@829 -- # '[' -z 2535676 ']' 00:06:38.247 17:15:57 -- accel/accel.sh@58 -- # build_accel_config 00:06:38.247 17:15:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.247 17:15:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.247 17:15:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.247 17:15:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.247 17:15:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.247 17:15:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.247 17:15:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.247 17:15:57 -- common/autotest_common.sh@10 -- # set +x 00:06:38.247 17:15:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.247 17:15:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.247 17:15:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.247 17:15:57 -- accel/accel.sh@42 -- # jq -r . 00:06:38.247 [2024-11-09 17:15:57.911087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.247 [2024-11-09 17:15:57.911139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2535676 ] 00:06:38.247 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.247 [2024-11-09 17:15:57.980561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.507 [2024-11-09 17:15:58.053552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.507 [2024-11-09 17:15:58.053662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.075 17:15:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.075 17:15:58 -- common/autotest_common.sh@862 -- # return 0 00:06:39.075 17:15:58 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:39.075 17:15:58 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:39.075 17:15:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.075 17:15:58 -- common/autotest_common.sh@10 -- # set +x 00:06:39.075 17:15:58 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:39.075 17:15:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.075 17:15:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.075 17:15:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.075 17:15:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.075 17:15:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.075 17:15:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.075 17:15:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.075 17:15:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.075 17:15:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.075 17:15:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.075 17:15:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.075 17:15:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.075 17:15:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.075 17:15:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.075 17:15:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.075 17:15:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.075 17:15:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.075 17:15:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.075 17:15:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.075 17:15:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.075 17:15:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.075 17:15:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.075 17:15:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.075 17:15:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.075 17:15:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.075 17:15:58 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.075 17:15:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.075 17:15:58 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # IFS== 00:06:39.075 17:15:58 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.075 17:15:58 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.075 17:15:58 -- accel/accel.sh@67 -- # killprocess 2535676 00:06:39.075 17:15:58 -- common/autotest_common.sh@936 -- # '[' -z 2535676 ']' 00:06:39.075 17:15:58 -- common/autotest_common.sh@940 -- # kill -0 2535676 00:06:39.075 17:15:58 -- common/autotest_common.sh@941 -- # uname 00:06:39.075 17:15:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:39.075 17:15:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2535676 00:06:39.334 17:15:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:39.334 17:15:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:39.334 17:15:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2535676' 00:06:39.334 killing process with pid 2535676 00:06:39.334 17:15:58 -- common/autotest_common.sh@955 -- # kill 2535676 00:06:39.334 17:15:58 -- common/autotest_common.sh@960 -- # wait 2535676 00:06:39.594 17:15:59 -- accel/accel.sh@68 -- # trap - ERR 00:06:39.594 17:15:59 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:39.594 17:15:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:39.594 17:15:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.594 17:15:59 -- common/autotest_common.sh@10 -- # set +x 00:06:39.594 17:15:59 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:39.594 17:15:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:39.594 17:15:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.594 17:15:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.594 17:15:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.594 17:15:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.594 17:15:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.594 17:15:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.594 17:15:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.594 17:15:59 -- accel/accel.sh@42 -- # jq -r . 
00:06:39.594 17:15:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.594 17:15:59 -- common/autotest_common.sh@10 -- # set +x 00:06:39.594 17:15:59 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:39.594 17:15:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:39.594 17:15:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.594 17:15:59 -- common/autotest_common.sh@10 -- # set +x 00:06:39.594 ************************************ 00:06:39.594 START TEST accel_missing_filename 00:06:39.594 ************************************ 00:06:39.594 17:15:59 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:39.594 17:15:59 -- common/autotest_common.sh@650 -- # local es=0 00:06:39.594 17:15:59 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:39.594 17:15:59 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:39.594 17:15:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.594 17:15:59 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:39.594 17:15:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.594 17:15:59 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:39.594 17:15:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:39.594 17:15:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.594 17:15:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.594 17:15:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.594 17:15:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.594 17:15:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.594 17:15:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.594 17:15:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.594 17:15:59 -- accel/accel.sh@42 -- # jq -r . 00:06:39.594 [2024-11-09 17:15:59.268706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.594 [2024-11-09 17:15:59.268769] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2535983 ] 00:06:39.594 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.594 [2024-11-09 17:15:59.335839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.854 [2024-11-09 17:15:59.402726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.854 [2024-11-09 17:15:59.443203] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:39.854 [2024-11-09 17:15:59.502937] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:39.854 A filename is required. 
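[Annotation, not part of the captured log] The "A filename is required." error above is the expected outcome: the compress workload reads its input from a file, so accel_perf refuses to start when -w compress is given without -l. A sketch of the failing versus passing invocation, using only flags documented in the -h listing further down:

# Fails as demonstrated above: compress has no input file to read.
accel_perf -t 1 -w compress
# Passes: -l names the uncompressed input file; accel_compress_verify below
# uses test/accel/bib from the SPDK tree for this.
accel_perf -t 1 -w compress -l ./test/accel/bib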
00:06:39.854 17:15:59 -- common/autotest_common.sh@653 -- # es=234 00:06:39.854 17:15:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.854 17:15:59 -- common/autotest_common.sh@662 -- # es=106 00:06:39.854 17:15:59 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:39.854 17:15:59 -- common/autotest_common.sh@670 -- # es=1 00:06:39.854 17:15:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.854 00:06:39.854 real 0m0.353s 00:06:39.854 user 0m0.263s 00:06:39.854 sys 0m0.126s 00:06:39.854 17:15:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.854 17:15:59 -- common/autotest_common.sh@10 -- # set +x 00:06:39.854 ************************************ 00:06:39.854 END TEST accel_missing_filename 00:06:39.854 ************************************ 00:06:40.113 17:15:59 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:40.113 17:15:59 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:40.113 17:15:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.113 17:15:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.113 ************************************ 00:06:40.113 START TEST accel_compress_verify 00:06:40.113 ************************************ 00:06:40.113 17:15:59 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:40.113 17:15:59 -- common/autotest_common.sh@650 -- # local es=0 00:06:40.113 17:15:59 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:40.113 17:15:59 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:40.113 17:15:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.113 17:15:59 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:40.113 17:15:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.113 17:15:59 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:40.113 17:15:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:40.113 17:15:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.113 17:15:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.113 17:15:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.113 17:15:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.113 17:15:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.113 17:15:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.113 17:15:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.113 17:15:59 -- accel/accel.sh@42 -- # jq -r . 00:06:40.113 [2024-11-09 17:15:59.662071] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:40.113 [2024-11-09 17:15:59.662132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2536071 ] 00:06:40.113 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.113 [2024-11-09 17:15:59.730757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.113 [2024-11-09 17:15:59.796954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.113 [2024-11-09 17:15:59.837610] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.372 [2024-11-09 17:15:59.897781] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:40.372 00:06:40.372 Compression does not support the verify option, aborting. 00:06:40.372 17:15:59 -- common/autotest_common.sh@653 -- # es=161 00:06:40.372 17:15:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.372 17:15:59 -- common/autotest_common.sh@662 -- # es=33 00:06:40.372 17:15:59 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:40.372 17:15:59 -- common/autotest_common.sh@670 -- # es=1 00:06:40.372 17:15:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.372 00:06:40.372 real 0m0.354s 00:06:40.372 user 0m0.267s 00:06:40.372 sys 0m0.124s 00:06:40.372 17:15:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.372 17:15:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.372 ************************************ 00:06:40.372 END TEST accel_compress_verify 00:06:40.372 ************************************ 00:06:40.372 17:16:00 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:40.372 17:16:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:40.372 17:16:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.372 17:16:00 -- common/autotest_common.sh@10 -- # set +x 00:06:40.372 ************************************ 00:06:40.372 START TEST accel_wrong_workload 00:06:40.372 ************************************ 00:06:40.372 17:16:00 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:40.372 17:16:00 -- common/autotest_common.sh@650 -- # local es=0 00:06:40.372 17:16:00 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:40.372 17:16:00 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:40.372 17:16:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.372 17:16:00 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:40.372 17:16:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.372 17:16:00 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:40.372 17:16:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:40.372 17:16:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.372 17:16:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.372 17:16:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.372 17:16:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.372 17:16:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.372 17:16:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.372 17:16:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.372 17:16:00 -- accel/accel.sh@42 -- # jq -r . 
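[Annotation, not part of the captured log] Both negative tests above lean on the NOT wrapper from autotest_common.sh: the wrapped accel_perf is expected to fail, and its raw exit status (234 for the missing filename, 161 for compress with verify) is folded down before NOT inverts it. A condensed sketch of that normalization, following the es= values printed in the traces (not the verbatim helper):

es=$?                                   # e.g. 234 or 161 from the failing accel_perf
(( es > 128 )) && es=$(( es - 128 ))    # 234 -> 106, 161 -> 33, as seen above
case "$es" in
  0) ;;                                 # unexpected success stays 0
  *) es=1 ;;                            # any failure collapses to 1
esac
(( !es == 0 ))                          # NOT passes only if the command failed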
00:06:40.372 Unsupported workload type: foobar 00:06:40.372 [2024-11-09 17:16:00.060758] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:40.372 accel_perf options: 00:06:40.372 [-h help message] 00:06:40.372 [-q queue depth per core] 00:06:40.372 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:40.372 [-T number of threads per core 00:06:40.372 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:40.372 [-t time in seconds] 00:06:40.372 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:40.372 [ dif_verify, , dif_generate, dif_generate_copy 00:06:40.372 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:40.372 [-l for compress/decompress workloads, name of uncompressed input file 00:06:40.372 [-S for crc32c workload, use this seed value (default 0) 00:06:40.372 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:40.372 [-f for fill workload, use this BYTE value (default 255) 00:06:40.372 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:40.372 [-y verify result if this switch is on] 00:06:40.372 [-a tasks to allocate per core (default: same value as -q)] 00:06:40.372 Can be used to spread operations across a wider range of memory. 00:06:40.372 17:16:00 -- common/autotest_common.sh@653 -- # es=1 00:06:40.372 17:16:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.372 17:16:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:40.372 17:16:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.372 00:06:40.372 real 0m0.036s 00:06:40.372 user 0m0.023s 00:06:40.372 sys 0m0.013s 00:06:40.372 17:16:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.372 17:16:00 -- common/autotest_common.sh@10 -- # set +x 00:06:40.372 ************************************ 00:06:40.372 END TEST accel_wrong_workload 00:06:40.372 ************************************ 00:06:40.372 Error: writing output failed: Broken pipe 00:06:40.372 17:16:00 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:40.372 17:16:00 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:40.372 17:16:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.372 17:16:00 -- common/autotest_common.sh@10 -- # set +x 00:06:40.372 ************************************ 00:06:40.372 START TEST accel_negative_buffers 00:06:40.372 ************************************ 00:06:40.372 17:16:00 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:40.372 17:16:00 -- common/autotest_common.sh@650 -- # local es=0 00:06:40.372 17:16:00 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:40.372 17:16:00 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:40.372 17:16:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.372 17:16:00 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:40.372 17:16:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.372 17:16:00 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:40.372 17:16:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.372 17:16:00 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:40.372 17:16:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.372 17:16:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.372 17:16:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.372 17:16:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.372 17:16:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.372 17:16:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.372 17:16:00 -- accel/accel.sh@42 -- # jq -r . 00:06:40.631 -x option must be non-negative. 00:06:40.631 [2024-11-09 17:16:00.141582] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:40.631 accel_perf options: 00:06:40.631 [-h help message] 00:06:40.631 [-q queue depth per core] 00:06:40.631 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:40.631 [-T number of threads per core 00:06:40.631 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:40.631 [-t time in seconds] 00:06:40.631 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:40.631 [ dif_verify, , dif_generate, dif_generate_copy 00:06:40.631 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:40.631 [-l for compress/decompress workloads, name of uncompressed input file 00:06:40.631 [-S for crc32c workload, use this seed value (default 0) 00:06:40.631 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:40.631 [-f for fill workload, use this BYTE value (default 255) 00:06:40.631 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:40.631 [-y verify result if this switch is on] 00:06:40.631 [-a tasks to allocate per core (default: same value as -q)] 00:06:40.631 Can be used to spread operations across a wider range of memory. 
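[Annotation, not part of the captured log] The option listing printed above (once per negative test) doubles as a usage reference for the rest of this section. A few invocations built only from those documented flags, matching the positive tests that follow:

# crc32c with an explicit seed and result verification, as in accel_crc32c below:
accel_perf -t 1 -w crc32c -S 32 -y
# crc32c over a two-buffer io vector, as in accel_crc32c_C2 below:
accel_perf -t 1 -w crc32c -y -C 2
# xor needs at least two source buffers, so -x must be >= 2; the -x -1 run
# above is rejected by design.
accel_perf -t 1 -w xor -y -x 2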
00:06:40.631 17:16:00 -- common/autotest_common.sh@653 -- # es=1 00:06:40.631 17:16:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.631 17:16:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:40.631 17:16:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.631 00:06:40.631 real 0m0.035s 00:06:40.631 user 0m0.018s 00:06:40.631 sys 0m0.018s 00:06:40.631 17:16:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.631 17:16:00 -- common/autotest_common.sh@10 -- # set +x 00:06:40.631 ************************************ 00:06:40.631 END TEST accel_negative_buffers 00:06:40.631 ************************************ 00:06:40.631 Error: writing output failed: Broken pipe 00:06:40.631 17:16:00 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:40.631 17:16:00 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:40.631 17:16:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.631 17:16:00 -- common/autotest_common.sh@10 -- # set +x 00:06:40.631 ************************************ 00:06:40.631 START TEST accel_crc32c 00:06:40.631 ************************************ 00:06:40.631 17:16:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:40.631 17:16:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.631 17:16:00 -- accel/accel.sh@17 -- # local accel_module 00:06:40.631 17:16:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:40.631 17:16:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:40.631 17:16:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.631 17:16:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.631 17:16:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.631 17:16:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.631 17:16:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.631 17:16:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.631 17:16:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.631 17:16:00 -- accel/accel.sh@42 -- # jq -r . 00:06:40.631 [2024-11-09 17:16:00.217688] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.631 [2024-11-09 17:16:00.217752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2536313 ] 00:06:40.631 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.631 [2024-11-09 17:16:00.288796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.631 [2024-11-09 17:16:00.358274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.009 17:16:01 -- accel/accel.sh@18 -- # out=' 00:06:42.009 SPDK Configuration: 00:06:42.009 Core mask: 0x1 00:06:42.009 00:06:42.009 Accel Perf Configuration: 00:06:42.009 Workload Type: crc32c 00:06:42.009 CRC-32C seed: 32 00:06:42.009 Transfer size: 4096 bytes 00:06:42.009 Vector count 1 00:06:42.009 Module: software 00:06:42.009 Queue depth: 32 00:06:42.009 Allocate depth: 32 00:06:42.009 # threads/core: 1 00:06:42.009 Run time: 1 seconds 00:06:42.009 Verify: Yes 00:06:42.009 00:06:42.009 Running for 1 seconds... 
00:06:42.009 00:06:42.009 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.009 ------------------------------------------------------------------------------------ 00:06:42.009 0,0 588288/s 2298 MiB/s 0 0 00:06:42.009 ==================================================================================== 00:06:42.009 Total 588288/s 2298 MiB/s 0 0' 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:42.009 17:16:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:42.009 17:16:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.009 17:16:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.009 17:16:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.009 17:16:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.009 17:16:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.009 17:16:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.009 17:16:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.009 17:16:01 -- accel/accel.sh@42 -- # jq -r . 00:06:42.009 [2024-11-09 17:16:01.577331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.009 [2024-11-09 17:16:01.577398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2536578 ] 00:06:42.009 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.009 [2024-11-09 17:16:01.644272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.009 [2024-11-09 17:16:01.707916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val= 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val= 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val=0x1 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val= 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val= 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val=crc32c 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val=32 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 
-- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val= 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val=software 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val=32 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val=32 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val=1 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val=Yes 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val= 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:42.009 17:16:01 -- accel/accel.sh@21 -- # val= 00:06:42.009 17:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # IFS=: 00:06:42.009 17:16:01 -- accel/accel.sh@20 -- # read -r var val 00:06:43.388 17:16:02 -- accel/accel.sh@21 -- # val= 00:06:43.388 17:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.388 17:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:43.388 17:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:43.388 17:16:02 -- accel/accel.sh@21 -- # val= 00:06:43.388 17:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.388 17:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:43.388 17:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:43.388 17:16:02 -- accel/accel.sh@21 -- # val= 00:06:43.388 17:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.388 17:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:43.388 17:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:43.388 17:16:02 -- accel/accel.sh@21 -- # val= 00:06:43.388 17:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.388 17:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:43.388 17:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:43.388 17:16:02 -- accel/accel.sh@21 -- # val= 00:06:43.388 17:16:02 -- accel/accel.sh@22 -- # case "$var" in 
00:06:43.388 17:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:43.388 17:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:43.388 17:16:02 -- accel/accel.sh@21 -- # val= 00:06:43.388 17:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.388 17:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:43.388 17:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:43.388 17:16:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.388 17:16:02 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:43.388 17:16:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.388 00:06:43.388 real 0m2.715s 00:06:43.388 user 0m2.486s 00:06:43.388 sys 0m0.239s 00:06:43.388 17:16:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.388 17:16:02 -- common/autotest_common.sh@10 -- # set +x 00:06:43.388 ************************************ 00:06:43.388 END TEST accel_crc32c 00:06:43.388 ************************************ 00:06:43.388 17:16:02 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:43.388 17:16:02 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:43.388 17:16:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.388 17:16:02 -- common/autotest_common.sh@10 -- # set +x 00:06:43.388 ************************************ 00:06:43.388 START TEST accel_crc32c_C2 00:06:43.388 ************************************ 00:06:43.388 17:16:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:43.388 17:16:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.388 17:16:02 -- accel/accel.sh@17 -- # local accel_module 00:06:43.388 17:16:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:43.388 17:16:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:43.388 17:16:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.388 17:16:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.388 17:16:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.388 17:16:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.388 17:16:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.388 17:16:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.388 17:16:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.388 17:16:02 -- accel/accel.sh@42 -- # jq -r . 00:06:43.388 [2024-11-09 17:16:02.970251] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:43.388 [2024-11-09 17:16:02.970313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2536778 ] 00:06:43.388 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.388 [2024-11-09 17:16:03.038965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.388 [2024-11-09 17:16:03.104622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.767 17:16:04 -- accel/accel.sh@18 -- # out=' 00:06:44.767 SPDK Configuration: 00:06:44.767 Core mask: 0x1 00:06:44.767 00:06:44.767 Accel Perf Configuration: 00:06:44.767 Workload Type: crc32c 00:06:44.767 CRC-32C seed: 0 00:06:44.767 Transfer size: 4096 bytes 00:06:44.767 Vector count 2 00:06:44.767 Module: software 00:06:44.767 Queue depth: 32 00:06:44.767 Allocate depth: 32 00:06:44.767 # threads/core: 1 00:06:44.767 Run time: 1 seconds 00:06:44.767 Verify: Yes 00:06:44.767 00:06:44.767 Running for 1 seconds... 00:06:44.767 00:06:44.767 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.767 ------------------------------------------------------------------------------------ 00:06:44.767 0,0 482912/s 3772 MiB/s 0 0 00:06:44.767 ==================================================================================== 00:06:44.767 Total 482912/s 1886 MiB/s 0 0' 00:06:44.767 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.767 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.767 17:16:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:44.767 17:16:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:44.767 17:16:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.767 17:16:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.767 17:16:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.767 17:16:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.767 17:16:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.767 17:16:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.767 17:16:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.767 17:16:04 -- accel/accel.sh@42 -- # jq -r . 00:06:44.767 [2024-11-09 17:16:04.323605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:44.768 [2024-11-09 17:16:04.323667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2536961 ] 00:06:44.768 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.768 [2024-11-09 17:16:04.391732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.768 [2024-11-09 17:16:04.457315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val= 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val= 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val=0x1 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val= 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val= 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val=crc32c 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val=0 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val= 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val=software 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val=32 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val=32 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- 
accel/accel.sh@21 -- # val=1 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val=Yes 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val= 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.768 17:16:04 -- accel/accel.sh@21 -- # val= 00:06:44.768 17:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.768 17:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:46.147 17:16:05 -- accel/accel.sh@21 -- # val= 00:06:46.147 17:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.147 17:16:05 -- accel/accel.sh@20 -- # IFS=: 00:06:46.147 17:16:05 -- accel/accel.sh@20 -- # read -r var val 00:06:46.147 17:16:05 -- accel/accel.sh@21 -- # val= 00:06:46.147 17:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.147 17:16:05 -- accel/accel.sh@20 -- # IFS=: 00:06:46.147 17:16:05 -- accel/accel.sh@20 -- # read -r var val 00:06:46.147 17:16:05 -- accel/accel.sh@21 -- # val= 00:06:46.147 17:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.147 17:16:05 -- accel/accel.sh@20 -- # IFS=: 00:06:46.147 17:16:05 -- accel/accel.sh@20 -- # read -r var val 00:06:46.147 17:16:05 -- accel/accel.sh@21 -- # val= 00:06:46.147 17:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.147 17:16:05 -- accel/accel.sh@20 -- # IFS=: 00:06:46.147 17:16:05 -- accel/accel.sh@20 -- # read -r var val 00:06:46.147 17:16:05 -- accel/accel.sh@21 -- # val= 00:06:46.147 17:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.147 17:16:05 -- accel/accel.sh@20 -- # IFS=: 00:06:46.147 17:16:05 -- accel/accel.sh@20 -- # read -r var val 00:06:46.147 17:16:05 -- accel/accel.sh@21 -- # val= 00:06:46.147 17:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.147 17:16:05 -- accel/accel.sh@20 -- # IFS=: 00:06:46.147 17:16:05 -- accel/accel.sh@20 -- # read -r var val 00:06:46.147 17:16:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.147 17:16:05 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:46.147 17:16:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.147 00:06:46.147 real 0m2.711s 00:06:46.147 user 0m2.469s 00:06:46.147 sys 0m0.252s 00:06:46.147 17:16:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.147 17:16:05 -- common/autotest_common.sh@10 -- # set +x 00:06:46.147 ************************************ 00:06:46.147 END TEST accel_crc32c_C2 00:06:46.147 ************************************ 00:06:46.147 17:16:05 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:46.147 17:16:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:46.147 17:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.147 17:16:05 -- common/autotest_common.sh@10 -- # set +x 00:06:46.147 ************************************ 00:06:46.147 START TEST accel_copy 
00:06:46.147 ************************************ 00:06:46.147 17:16:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:46.147 17:16:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.148 17:16:05 -- accel/accel.sh@17 -- # local accel_module 00:06:46.148 17:16:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:46.148 17:16:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:46.148 17:16:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.148 17:16:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.148 17:16:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.148 17:16:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.148 17:16:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.148 17:16:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.148 17:16:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.148 17:16:05 -- accel/accel.sh@42 -- # jq -r . 00:06:46.148 [2024-11-09 17:16:05.723399] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.148 [2024-11-09 17:16:05.723483] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2537186 ] 00:06:46.148 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.148 [2024-11-09 17:16:05.792735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.148 [2024-11-09 17:16:05.858219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.527 17:16:07 -- accel/accel.sh@18 -- # out=' 00:06:47.527 SPDK Configuration: 00:06:47.527 Core mask: 0x1 00:06:47.527 00:06:47.527 Accel Perf Configuration: 00:06:47.527 Workload Type: copy 00:06:47.527 Transfer size: 4096 bytes 00:06:47.527 Vector count 1 00:06:47.527 Module: software 00:06:47.527 Queue depth: 32 00:06:47.527 Allocate depth: 32 00:06:47.527 # threads/core: 1 00:06:47.527 Run time: 1 seconds 00:06:47.527 Verify: Yes 00:06:47.527 00:06:47.527 Running for 1 seconds... 00:06:47.527 00:06:47.527 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.527 ------------------------------------------------------------------------------------ 00:06:47.527 0,0 455520/s 1779 MiB/s 0 0 00:06:47.527 ==================================================================================== 00:06:47.527 Total 455520/s 1779 MiB/s 0 0' 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:47.527 17:16:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:47.527 17:16:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.527 17:16:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.527 17:16:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.527 17:16:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.527 17:16:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.527 17:16:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.527 17:16:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.527 17:16:07 -- accel/accel.sh@42 -- # jq -r . 00:06:47.527 [2024-11-09 17:16:07.074488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
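Every accel test in this log follows the same two-pass shape: a first accel_perf run whose summary is captured into the shell variable out= (the 'SPDK Configuration' header plus the results table), then a second run executed under xtrace, where the long stretches of val= lines are accel.sh's trace as it reads the test parameters through a 'while IFS=: read -r var val' loop and finally asserts that the software module handled the expected opcode ([[ -n software ]], [[ -n copy ]], and so on). A minimal sketch of that parsing pattern follows; the helper name parse_accel_summary and the 'opcode:/module:' input format are illustrative, not the script's actual internals.

    # Hypothetical reconstruction of the pattern visible in the accel.sh trace:
    # read key/value pairs, remember opcode and module, then assert on them.
    parse_accel_summary() {
        local accel_opc= accel_module=
        while IFS=: read -r var val; do
            case "$var" in
                opcode) accel_opc=${val// /} ;;
                module) accel_module=${val// /} ;;
            esac
        done
        [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]
    }
    printf 'opcode: copy\nmodule: software\n' | parse_accel_summary && echo 'software handled copy'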
00:06:47.527 [2024-11-09 17:16:07.074550] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2537446 ] 00:06:47.527 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.527 [2024-11-09 17:16:07.141275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.527 [2024-11-09 17:16:07.205040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val= 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val= 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val=0x1 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val= 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val= 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val=copy 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val= 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val=software 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val=32 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val=32 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val=1 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val=Yes 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val= 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:47.527 17:16:07 -- accel/accel.sh@21 -- # val= 00:06:47.527 17:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # IFS=: 00:06:47.527 17:16:07 -- accel/accel.sh@20 -- # read -r var val 00:06:48.906 17:16:08 -- accel/accel.sh@21 -- # val= 00:06:48.906 17:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.906 17:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.906 17:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.906 17:16:08 -- accel/accel.sh@21 -- # val= 00:06:48.906 17:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.906 17:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.906 17:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.906 17:16:08 -- accel/accel.sh@21 -- # val= 00:06:48.906 17:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.906 17:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.906 17:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.906 17:16:08 -- accel/accel.sh@21 -- # val= 00:06:48.906 17:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.906 17:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.906 17:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.906 17:16:08 -- accel/accel.sh@21 -- # val= 00:06:48.906 17:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.906 17:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.906 17:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.906 17:16:08 -- accel/accel.sh@21 -- # val= 00:06:48.906 17:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.906 17:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.906 17:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.906 17:16:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.906 17:16:08 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:48.906 17:16:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.906 00:06:48.906 real 0m2.706s 00:06:48.906 user 0m2.454s 00:06:48.906 sys 0m0.260s 00:06:48.906 17:16:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.906 17:16:08 -- common/autotest_common.sh@10 -- # set +x 00:06:48.906 ************************************ 00:06:48.906 END TEST accel_copy 00:06:48.906 ************************************ 00:06:48.906 17:16:08 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:48.906 17:16:08 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:48.906 17:16:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.906 17:16:08 -- common/autotest_common.sh@10 -- # set +x 00:06:48.906 ************************************ 00:06:48.906 START TEST accel_fill 00:06:48.906 ************************************ 00:06:48.906 17:16:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:48.906 17:16:08 -- accel/accel.sh@16 -- # local accel_opc 
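The fill test starting here drives the same accel_perf binary with a different workload; matching the flags on the command recorded below against the summary it prints: -t 1 is the run time in seconds, -w fill the workload type, -f 128 the fill pattern (0x80), -q 64 the queue depth, -a 64 the allocate depth, and -y turns on verification. Run by hand it looks like this (the path is the one this job uses; with no -c config only the default software module is available):

    # Standalone fill run with the same parameters as the scripted test below.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w fill -f 128 -q 64 -a 64 -y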
00:06:48.906 17:16:08 -- accel/accel.sh@17 -- # local accel_module 00:06:48.906 17:16:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:48.906 17:16:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:48.906 17:16:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.906 17:16:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.906 17:16:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.906 17:16:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.906 17:16:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.906 17:16:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.906 17:16:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.906 17:16:08 -- accel/accel.sh@42 -- # jq -r . 00:06:48.906 [2024-11-09 17:16:08.469775] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.906 [2024-11-09 17:16:08.469846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2537737 ] 00:06:48.906 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.906 [2024-11-09 17:16:08.536334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.906 [2024-11-09 17:16:08.601418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.285 17:16:09 -- accel/accel.sh@18 -- # out=' 00:06:50.285 SPDK Configuration: 00:06:50.285 Core mask: 0x1 00:06:50.285 00:06:50.285 Accel Perf Configuration: 00:06:50.285 Workload Type: fill 00:06:50.285 Fill pattern: 0x80 00:06:50.285 Transfer size: 4096 bytes 00:06:50.285 Vector count 1 00:06:50.285 Module: software 00:06:50.285 Queue depth: 64 00:06:50.285 Allocate depth: 64 00:06:50.285 # threads/core: 1 00:06:50.285 Run time: 1 seconds 00:06:50.285 Verify: Yes 00:06:50.285 00:06:50.285 Running for 1 seconds... 00:06:50.285 00:06:50.285 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.285 ------------------------------------------------------------------------------------ 00:06:50.285 0,0 706048/s 2758 MiB/s 0 0 00:06:50.285 ==================================================================================== 00:06:50.285 Total 706048/s 2758 MiB/s 0 0' 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.285 17:16:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:50.285 17:16:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:50.285 17:16:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.285 17:16:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.285 17:16:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.285 17:16:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.285 17:16:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.285 17:16:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.285 17:16:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.285 17:16:09 -- accel/accel.sh@42 -- # jq -r . 00:06:50.285 [2024-11-09 17:16:09.820706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
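The Bandwidth column in these tables is simply transfers per second multiplied by the transfer size, expressed in MiB/s; the fill and copy figures above check out:

    # 1 MiB = 1048576 bytes
    echo $(( 706048 * 4096 / 1048576 ))   # 2758 MiB/s, the fill result above
    echo $(( 455520 * 4096 / 1048576 ))   # 1779 MiB/s, the earlier copy result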
00:06:50.285 [2024-11-09 17:16:09.820771] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2538005 ] 00:06:50.285 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.285 [2024-11-09 17:16:09.888559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.285 [2024-11-09 17:16:09.952219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.285 17:16:09 -- accel/accel.sh@21 -- # val= 00:06:50.285 17:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.285 17:16:09 -- accel/accel.sh@21 -- # val= 00:06:50.285 17:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.285 17:16:09 -- accel/accel.sh@21 -- # val=0x1 00:06:50.285 17:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.285 17:16:09 -- accel/accel.sh@21 -- # val= 00:06:50.285 17:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.285 17:16:09 -- accel/accel.sh@21 -- # val= 00:06:50.285 17:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.285 17:16:09 -- accel/accel.sh@21 -- # val=fill 00:06:50.285 17:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.285 17:16:09 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.285 17:16:09 -- accel/accel.sh@21 -- # val=0x80 00:06:50.285 17:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.285 17:16:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.285 17:16:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.285 17:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.285 17:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.285 17:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.285 17:16:10 -- accel/accel.sh@21 -- # val= 00:06:50.285 17:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.285 17:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.285 17:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.285 17:16:10 -- accel/accel.sh@21 -- # val=software 00:06:50.285 17:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.285 17:16:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.285 17:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.285 17:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.285 17:16:10 -- accel/accel.sh@21 -- # val=64 00:06:50.285 17:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.285 17:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.285 17:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.285 17:16:10 -- accel/accel.sh@21 -- # val=64 00:06:50.285 17:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.285 17:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.285 17:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.285 17:16:10 -- 
accel/accel.sh@21 -- # val=1 00:06:50.285 17:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.285 17:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.285 17:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.285 17:16:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.285 17:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.286 17:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.286 17:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.286 17:16:10 -- accel/accel.sh@21 -- # val=Yes 00:06:50.286 17:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.286 17:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.286 17:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.286 17:16:10 -- accel/accel.sh@21 -- # val= 00:06:50.286 17:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.286 17:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.286 17:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.286 17:16:10 -- accel/accel.sh@21 -- # val= 00:06:50.286 17:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.286 17:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.286 17:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:51.664 17:16:11 -- accel/accel.sh@21 -- # val= 00:06:51.664 17:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.664 17:16:11 -- accel/accel.sh@20 -- # IFS=: 00:06:51.664 17:16:11 -- accel/accel.sh@20 -- # read -r var val 00:06:51.664 17:16:11 -- accel/accel.sh@21 -- # val= 00:06:51.664 17:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.664 17:16:11 -- accel/accel.sh@20 -- # IFS=: 00:06:51.664 17:16:11 -- accel/accel.sh@20 -- # read -r var val 00:06:51.664 17:16:11 -- accel/accel.sh@21 -- # val= 00:06:51.664 17:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.664 17:16:11 -- accel/accel.sh@20 -- # IFS=: 00:06:51.664 17:16:11 -- accel/accel.sh@20 -- # read -r var val 00:06:51.664 17:16:11 -- accel/accel.sh@21 -- # val= 00:06:51.664 17:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.664 17:16:11 -- accel/accel.sh@20 -- # IFS=: 00:06:51.664 17:16:11 -- accel/accel.sh@20 -- # read -r var val 00:06:51.664 17:16:11 -- accel/accel.sh@21 -- # val= 00:06:51.664 17:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.664 17:16:11 -- accel/accel.sh@20 -- # IFS=: 00:06:51.664 17:16:11 -- accel/accel.sh@20 -- # read -r var val 00:06:51.664 17:16:11 -- accel/accel.sh@21 -- # val= 00:06:51.664 17:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.664 17:16:11 -- accel/accel.sh@20 -- # IFS=: 00:06:51.664 17:16:11 -- accel/accel.sh@20 -- # read -r var val 00:06:51.664 17:16:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.664 17:16:11 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:51.664 17:16:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.664 00:06:51.664 real 0m2.707s 00:06:51.664 user 0m2.472s 00:06:51.665 sys 0m0.244s 00:06:51.665 17:16:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.665 17:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:51.665 ************************************ 00:06:51.665 END TEST accel_fill 00:06:51.665 ************************************ 00:06:51.665 17:16:11 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:51.665 17:16:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:51.665 17:16:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.665 17:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:51.665 ************************************ 00:06:51.665 START TEST 
accel_copy_crc32c 00:06:51.665 ************************************ 00:06:51.665 17:16:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:51.665 17:16:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.665 17:16:11 -- accel/accel.sh@17 -- # local accel_module 00:06:51.665 17:16:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:51.665 17:16:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:51.665 17:16:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.665 17:16:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.665 17:16:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.665 17:16:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.665 17:16:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.665 17:16:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.665 17:16:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.665 17:16:11 -- accel/accel.sh@42 -- # jq -r . 00:06:51.665 [2024-11-09 17:16:11.218318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.665 [2024-11-09 17:16:11.218391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2538286 ] 00:06:51.665 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.665 [2024-11-09 17:16:11.287746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.665 [2024-11-09 17:16:11.350178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.044 17:16:12 -- accel/accel.sh@18 -- # out=' 00:06:53.044 SPDK Configuration: 00:06:53.044 Core mask: 0x1 00:06:53.044 00:06:53.044 Accel Perf Configuration: 00:06:53.044 Workload Type: copy_crc32c 00:06:53.044 CRC-32C seed: 0 00:06:53.044 Vector size: 4096 bytes 00:06:53.044 Transfer size: 4096 bytes 00:06:53.044 Vector count 1 00:06:53.044 Module: software 00:06:53.044 Queue depth: 32 00:06:53.044 Allocate depth: 32 00:06:53.044 # threads/core: 1 00:06:53.044 Run time: 1 seconds 00:06:53.044 Verify: Yes 00:06:53.044 00:06:53.044 Running for 1 seconds... 00:06:53.044 00:06:53.044 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.044 ------------------------------------------------------------------------------------ 00:06:53.044 0,0 337152/s 1317 MiB/s 0 0 00:06:53.044 ==================================================================================== 00:06:53.044 Total 337152/s 1317 MiB/s 0 0' 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:53.044 17:16:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:53.044 17:16:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.044 17:16:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.044 17:16:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.044 17:16:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.044 17:16:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.044 17:16:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.044 17:16:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.044 17:16:12 -- accel/accel.sh@42 -- # jq -r . 
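Each invocation above passes -c /dev/fd/62: build_accel_config appears to assemble a JSON accel configuration in the accel_json_cfg array (empty in this job, since none of the '[[ 0 -gt 0 ]]' module checks fire), join it with IFS=',', normalize it with 'jq -r .' and hand it to accel_perf over file descriptor 62 rather than a temporary file. With an empty configuration every opcode falls back to the software module, which is why each test ends by checking '[[ -n software ]]'. A rough sketch of that file-descriptor plumbing, with a placeholder document instead of the real accel config schema:

    # Feed generated JSON to a consumer over an inherited descriptor, no temp file.
    cfg='{}'                              # placeholder only, not the accel schema
    exec 62< <(printf '%s\n' "$cfg" | jq -r .)
    cat /dev/fd/62                        # accel_perf reads its -c /dev/fd/62 the same way
    exec 62<&-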
00:06:53.044 [2024-11-09 17:16:12.571805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.044 [2024-11-09 17:16:12.571870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2538561 ] 00:06:53.044 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.044 [2024-11-09 17:16:12.639884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.044 [2024-11-09 17:16:12.703890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val= 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val= 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val=0x1 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val= 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val= 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val=0 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val= 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val=software 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val=32 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 
00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val=32 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val=1 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val=Yes 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val= 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.044 17:16:12 -- accel/accel.sh@21 -- # val= 00:06:53.044 17:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.044 17:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:54.421 17:16:13 -- accel/accel.sh@21 -- # val= 00:06:54.421 17:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.421 17:16:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.421 17:16:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.421 17:16:13 -- accel/accel.sh@21 -- # val= 00:06:54.421 17:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.421 17:16:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.421 17:16:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.421 17:16:13 -- accel/accel.sh@21 -- # val= 00:06:54.421 17:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.421 17:16:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.421 17:16:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.421 17:16:13 -- accel/accel.sh@21 -- # val= 00:06:54.421 17:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.421 17:16:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.421 17:16:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.421 17:16:13 -- accel/accel.sh@21 -- # val= 00:06:54.421 17:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.421 17:16:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.421 17:16:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.421 17:16:13 -- accel/accel.sh@21 -- # val= 00:06:54.421 17:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.421 17:16:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.421 17:16:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.421 17:16:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.421 17:16:13 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:54.421 17:16:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.421 00:06:54.421 real 0m2.713s 00:06:54.421 user 0m2.471s 00:06:54.421 sys 0m0.253s 00:06:54.421 17:16:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.421 17:16:13 -- common/autotest_common.sh@10 -- # set +x 00:06:54.421 ************************************ 00:06:54.421 END TEST accel_copy_crc32c 00:06:54.421 ************************************ 00:06:54.421 
17:16:13 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:54.421 17:16:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:54.422 17:16:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.422 17:16:13 -- common/autotest_common.sh@10 -- # set +x 00:06:54.422 ************************************ 00:06:54.422 START TEST accel_copy_crc32c_C2 00:06:54.422 ************************************ 00:06:54.422 17:16:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:54.422 17:16:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.422 17:16:13 -- accel/accel.sh@17 -- # local accel_module 00:06:54.422 17:16:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:54.422 17:16:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:54.422 17:16:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.422 17:16:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.422 17:16:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.422 17:16:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.422 17:16:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.422 17:16:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.422 17:16:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.422 17:16:13 -- accel/accel.sh@42 -- # jq -r . 00:06:54.422 [2024-11-09 17:16:13.969650] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.422 [2024-11-09 17:16:13.969711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2538844 ] 00:06:54.422 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.422 [2024-11-09 17:16:14.038096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.422 [2024-11-09 17:16:14.103576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.799 17:16:15 -- accel/accel.sh@18 -- # out=' 00:06:55.799 SPDK Configuration: 00:06:55.799 Core mask: 0x1 00:06:55.799 00:06:55.799 Accel Perf Configuration: 00:06:55.799 Workload Type: copy_crc32c 00:06:55.799 CRC-32C seed: 0 00:06:55.799 Vector size: 4096 bytes 00:06:55.799 Transfer size: 8192 bytes 00:06:55.799 Vector count 2 00:06:55.799 Module: software 00:06:55.799 Queue depth: 32 00:06:55.799 Allocate depth: 32 00:06:55.799 # threads/core: 1 00:06:55.799 Run time: 1 seconds 00:06:55.799 Verify: Yes 00:06:55.799 00:06:55.799 Running for 1 seconds... 
00:06:55.799 00:06:55.799 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.799 ------------------------------------------------------------------------------------ 00:06:55.799 0,0 249024/s 1945 MiB/s 0 0 00:06:55.799 ==================================================================================== 00:06:55.799 Total 249024/s 972 MiB/s 0 0' 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:55.799 17:16:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:55.799 17:16:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.799 17:16:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.799 17:16:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.799 17:16:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.799 17:16:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.799 17:16:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.799 17:16:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.799 17:16:15 -- accel/accel.sh@42 -- # jq -r . 00:06:55.799 [2024-11-09 17:16:15.324172] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.799 [2024-11-09 17:16:15.324236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539022 ] 00:06:55.799 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.799 [2024-11-09 17:16:15.392122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.799 [2024-11-09 17:16:15.455868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val= 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val= 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val=0x1 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val= 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val= 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val=0 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 
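With -C 2 the copy_crc32c workload uses two 4096-byte source vectors per operation, so the summary above reports an 8192-byte transfer size. The per-core row and the Total row appear to convert the same 249024 ops/s with different per-operation sizes:

    echo $(( 249024 * 8192 / 1048576 ))   # 1945 MiB/s, the per-core figure (8192-byte transfers)
    echo $(( 249024 * 4096 / 1048576 ))   # 972 MiB/s, the figure printed on the Total line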
00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val= 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val=software 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val=32 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val=32 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val=1 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val=Yes 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val= 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.799 17:16:15 -- accel/accel.sh@21 -- # val= 00:06:55.799 17:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.799 17:16:15 -- accel/accel.sh@20 -- # read -r var val 00:06:57.178 17:16:16 -- accel/accel.sh@21 -- # val= 00:06:57.178 17:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.178 17:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.178 17:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.178 17:16:16 -- accel/accel.sh@21 -- # val= 00:06:57.178 17:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.178 17:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.178 17:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.178 17:16:16 -- accel/accel.sh@21 -- # val= 00:06:57.178 17:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.178 17:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.178 17:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.178 17:16:16 -- accel/accel.sh@21 -- # val= 00:06:57.178 17:16:16 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:57.178 17:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.178 17:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.178 17:16:16 -- accel/accel.sh@21 -- # val= 00:06:57.178 17:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.178 17:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.178 17:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.178 17:16:16 -- accel/accel.sh@21 -- # val= 00:06:57.178 17:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.178 17:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.178 17:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.178 17:16:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.178 17:16:16 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:57.178 17:16:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.178 00:06:57.178 real 0m2.713s 00:06:57.178 user 0m2.461s 00:06:57.178 sys 0m0.263s 00:06:57.178 17:16:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.178 17:16:16 -- common/autotest_common.sh@10 -- # set +x 00:06:57.178 ************************************ 00:06:57.178 END TEST accel_copy_crc32c_C2 00:06:57.178 ************************************ 00:06:57.178 17:16:16 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:57.178 17:16:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:57.178 17:16:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.178 17:16:16 -- common/autotest_common.sh@10 -- # set +x 00:06:57.178 ************************************ 00:06:57.178 START TEST accel_dualcast 00:06:57.178 ************************************ 00:06:57.178 17:16:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:57.178 17:16:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.178 17:16:16 -- accel/accel.sh@17 -- # local accel_module 00:06:57.178 17:16:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:57.178 17:16:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:57.178 17:16:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.178 17:16:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.178 17:16:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.178 17:16:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.178 17:16:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.178 17:16:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.178 17:16:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.178 17:16:16 -- accel/accel.sh@42 -- # jq -r . 00:06:57.178 [2024-11-09 17:16:16.731222] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:57.178 [2024-11-09 17:16:16.731288] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539249 ] 00:06:57.178 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.178 [2024-11-09 17:16:16.800753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.178 [2024-11-09 17:16:16.866789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.557 17:16:18 -- accel/accel.sh@18 -- # out=' 00:06:58.557 SPDK Configuration: 00:06:58.557 Core mask: 0x1 00:06:58.558 00:06:58.558 Accel Perf Configuration: 00:06:58.558 Workload Type: dualcast 00:06:58.558 Transfer size: 4096 bytes 00:06:58.558 Vector count 1 00:06:58.558 Module: software 00:06:58.558 Queue depth: 32 00:06:58.558 Allocate depth: 32 00:06:58.558 # threads/core: 1 00:06:58.558 Run time: 1 seconds 00:06:58.558 Verify: Yes 00:06:58.558 00:06:58.558 Running for 1 seconds... 00:06:58.558 00:06:58.558 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.558 ------------------------------------------------------------------------------------ 00:06:58.558 0,0 528960/s 2066 MiB/s 0 0 00:06:58.558 ==================================================================================== 00:06:58.558 Total 528960/s 2066 MiB/s 0 0' 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:58.558 17:16:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:58.558 17:16:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.558 17:16:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.558 17:16:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.558 17:16:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.558 17:16:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.558 17:16:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.558 17:16:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.558 17:16:18 -- accel/accel.sh@42 -- # jq -r . 00:06:58.558 [2024-11-09 17:16:18.074327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
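dualcast, as the accel framework defines the opcode, writes one source buffer to two destination buffers per operation; the throughput above is again reported on the 4096-byte transfer size:

    echo $(( 528960 * 4096 / 1048576 ))   # 2066 MiB/s, the dualcast result above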
00:06:58.558 [2024-11-09 17:16:18.074380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539433 ] 00:06:58.558 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.558 [2024-11-09 17:16:18.137375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.558 [2024-11-09 17:16:18.203048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val= 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val= 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val=0x1 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val= 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val= 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val=dualcast 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val= 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val=software 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val=32 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val=32 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val=1 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 
-- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val=Yes 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val= 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.558 17:16:18 -- accel/accel.sh@21 -- # val= 00:06:58.558 17:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.558 17:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 17:16:19 -- accel/accel.sh@21 -- # val= 00:06:59.938 17:16:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 17:16:19 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 17:16:19 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 17:16:19 -- accel/accel.sh@21 -- # val= 00:06:59.938 17:16:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 17:16:19 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 17:16:19 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 17:16:19 -- accel/accel.sh@21 -- # val= 00:06:59.938 17:16:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 17:16:19 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 17:16:19 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 17:16:19 -- accel/accel.sh@21 -- # val= 00:06:59.938 17:16:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 17:16:19 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 17:16:19 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 17:16:19 -- accel/accel.sh@21 -- # val= 00:06:59.938 17:16:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 17:16:19 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 17:16:19 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 17:16:19 -- accel/accel.sh@21 -- # val= 00:06:59.938 17:16:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 17:16:19 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 17:16:19 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 17:16:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.938 17:16:19 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:59.938 17:16:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.938 00:06:59.938 real 0m2.695s 00:06:59.938 user 0m2.467s 00:06:59.938 sys 0m0.237s 00:06:59.938 17:16:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.938 17:16:19 -- common/autotest_common.sh@10 -- # set +x 00:06:59.938 ************************************ 00:06:59.938 END TEST accel_dualcast 00:06:59.938 ************************************ 00:06:59.938 17:16:19 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:59.938 17:16:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:59.938 17:16:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.938 17:16:19 -- common/autotest_common.sh@10 -- # set +x 00:06:59.938 ************************************ 00:06:59.938 START TEST accel_compare 00:06:59.938 ************************************ 00:06:59.938 17:16:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:59.938 17:16:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.938 17:16:19 
-- accel/accel.sh@17 -- # local accel_module 00:06:59.938 17:16:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:59.938 17:16:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:59.938 17:16:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.938 17:16:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.938 17:16:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.938 17:16:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.938 17:16:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.938 17:16:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.938 17:16:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.938 17:16:19 -- accel/accel.sh@42 -- # jq -r . 00:06:59.938 [2024-11-09 17:16:19.472607] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.938 [2024-11-09 17:16:19.472669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539704 ] 00:06:59.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.938 [2024-11-09 17:16:19.539918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.938 [2024-11-09 17:16:19.605226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.316 17:16:20 -- accel/accel.sh@18 -- # out=' 00:07:01.316 SPDK Configuration: 00:07:01.316 Core mask: 0x1 00:07:01.316 00:07:01.316 Accel Perf Configuration: 00:07:01.316 Workload Type: compare 00:07:01.316 Transfer size: 4096 bytes 00:07:01.316 Vector count 1 00:07:01.316 Module: software 00:07:01.316 Queue depth: 32 00:07:01.316 Allocate depth: 32 00:07:01.316 # threads/core: 1 00:07:01.316 Run time: 1 seconds 00:07:01.316 Verify: Yes 00:07:01.316 00:07:01.316 Running for 1 seconds... 00:07:01.316 00:07:01.316 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.316 ------------------------------------------------------------------------------------ 00:07:01.316 0,0 612416/s 2392 MiB/s 0 0 00:07:01.316 ==================================================================================== 00:07:01.316 Total 612416/s 2392 MiB/s 0 0' 00:07:01.316 17:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:01.316 17:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:01.316 17:16:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:01.316 17:16:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:01.316 17:16:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.316 17:16:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.316 17:16:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.316 17:16:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.316 17:16:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.316 17:16:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.316 17:16:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.316 17:16:20 -- accel/accel.sh@42 -- # jq -r . 00:07:01.316 [2024-11-09 17:16:20.827225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
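By this point the console carries one summary block per workload (crc32c, copy, fill, copy_crc32c, dualcast, compare). When scanning a long job like this one, it can be handy to pull just the workload names and Total lines back out of the saved console text; a small sketch, assuming the output has been saved to console.log:

    # Print each workload type, then its Total throughput line indented under it.
    awk '/Workload Type:/ {print $NF} /Total [0-9]+\/s/ {print "  " $0}' console.log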
00:07:01.316 [2024-11-09 17:16:20.827286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539978 ] 00:07:01.316 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.316 [2024-11-09 17:16:20.896048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.316 [2024-11-09 17:16:20.960996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.316 17:16:21 -- accel/accel.sh@21 -- # val= 00:07:01.316 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.316 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.316 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- accel/accel.sh@21 -- # val= 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- accel/accel.sh@21 -- # val=0x1 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- accel/accel.sh@21 -- # val= 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- accel/accel.sh@21 -- # val= 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- accel/accel.sh@21 -- # val=compare 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- accel/accel.sh@21 -- # val= 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- accel/accel.sh@21 -- # val=software 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- accel/accel.sh@21 -- # val=32 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- accel/accel.sh@21 -- # val=32 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- accel/accel.sh@21 -- # val=1 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- accel/accel.sh@21 -- # val=Yes 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- accel/accel.sh@21 -- # val= 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.317 17:16:21 -- accel/accel.sh@21 -- # val= 00:07:01.317 17:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.317 17:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:02.695 17:16:22 -- accel/accel.sh@21 -- # val= 00:07:02.695 17:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.695 17:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.695 17:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.695 17:16:22 -- accel/accel.sh@21 -- # val= 00:07:02.695 17:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.695 17:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.695 17:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.695 17:16:22 -- accel/accel.sh@21 -- # val= 00:07:02.695 17:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.695 17:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.695 17:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.695 17:16:22 -- accel/accel.sh@21 -- # val= 00:07:02.695 17:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.695 17:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.695 17:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.695 17:16:22 -- accel/accel.sh@21 -- # val= 00:07:02.695 17:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.695 17:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.695 17:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.695 17:16:22 -- accel/accel.sh@21 -- # val= 00:07:02.695 17:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.695 17:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.695 17:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.695 17:16:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.695 17:16:22 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:02.695 17:16:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.695 00:07:02.695 real 0m2.714s 00:07:02.695 user 0m2.475s 00:07:02.695 sys 0m0.248s 00:07:02.695 17:16:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.695 17:16:22 -- common/autotest_common.sh@10 -- # set +x 00:07:02.695 ************************************ 00:07:02.695 END TEST accel_compare 00:07:02.695 ************************************ 00:07:02.695 17:16:22 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:02.695 17:16:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:02.695 17:16:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.695 17:16:22 -- common/autotest_common.sh@10 -- # set +x 00:07:02.695 ************************************ 00:07:02.695 START TEST accel_xor 00:07:02.695 ************************************ 00:07:02.695 17:16:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:02.695 17:16:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.695 17:16:22 -- accel/accel.sh@17 
-- # local accel_module 00:07:02.695 17:16:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:02.695 17:16:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:02.695 17:16:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.695 17:16:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.695 17:16:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.695 17:16:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.695 17:16:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.695 17:16:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.695 17:16:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.695 17:16:22 -- accel/accel.sh@42 -- # jq -r . 00:07:02.695 [2024-11-09 17:16:22.217592] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.695 [2024-11-09 17:16:22.217642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540264 ] 00:07:02.695 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.695 [2024-11-09 17:16:22.279121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.695 [2024-11-09 17:16:22.344802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.074 17:16:23 -- accel/accel.sh@18 -- # out=' 00:07:04.074 SPDK Configuration: 00:07:04.074 Core mask: 0x1 00:07:04.074 00:07:04.074 Accel Perf Configuration: 00:07:04.074 Workload Type: xor 00:07:04.074 Source buffers: 2 00:07:04.074 Transfer size: 4096 bytes 00:07:04.074 Vector count 1 00:07:04.074 Module: software 00:07:04.074 Queue depth: 32 00:07:04.074 Allocate depth: 32 00:07:04.074 # threads/core: 1 00:07:04.074 Run time: 1 seconds 00:07:04.074 Verify: Yes 00:07:04.074 00:07:04.074 Running for 1 seconds... 00:07:04.074 00:07:04.074 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.074 ------------------------------------------------------------------------------------ 00:07:04.074 0,0 492864/s 1925 MiB/s 0 0 00:07:04.074 ==================================================================================== 00:07:04.074 Total 492864/s 1925 MiB/s 0 0' 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.074 17:16:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:04.074 17:16:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:04.074 17:16:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.074 17:16:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.074 17:16:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.074 17:16:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.074 17:16:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.074 17:16:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.074 17:16:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.074 17:16:23 -- accel/accel.sh@42 -- # jq -r . 00:07:04.074 [2024-11-09 17:16:23.552547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
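The long runs of 'IFS=:', 'read -r var val' and 'case "$var"' records surrounding each result table are xtrace output from the accel.sh wrapper parsing accel_perf's 'key: value' lines one pair at a time; that is where assignments such as accel_opc=xor and accel_module=software in the trace come from. A minimal sketch of that parsing pattern (not the literal script; the variable holding the captured output is hypothetical):

    while IFS=: read -r var val; do                      # split each output line on the first ':'
        case "$var" in
            *'Workload Type'*) accel_opc=${val# } ;;     # e.g. 'xor'
            *Module*)          accel_module=${val# } ;;  # e.g. 'software'
        esac
    done <<< "$accel_perf_output"                        # hypothetical: captured accel_perf output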
00:07:04.074 [2024-11-09 17:16:23.552597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540530 ] 00:07:04.074 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.074 [2024-11-09 17:16:23.614282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.074 [2024-11-09 17:16:23.678649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.074 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:04.074 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.074 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:04.074 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.074 17:16:23 -- accel/accel.sh@21 -- # val=0x1 00:07:04.074 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.074 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:04.074 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.074 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:04.074 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.074 17:16:23 -- accel/accel.sh@21 -- # val=xor 00:07:04.074 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.074 17:16:23 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.074 17:16:23 -- accel/accel.sh@21 -- # val=2 00:07:04.074 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.074 17:16:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.074 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.074 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:04.074 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.074 17:16:23 -- accel/accel.sh@21 -- # val=software 00:07:04.074 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.074 17:16:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.074 17:16:23 -- accel/accel.sh@21 -- # val=32 00:07:04.074 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.074 17:16:23 -- accel/accel.sh@21 -- # val=32 00:07:04.074 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.074 17:16:23 -- 
accel/accel.sh@21 -- # val=1 00:07:04.074 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.074 17:16:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.074 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.074 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.075 17:16:23 -- accel/accel.sh@21 -- # val=Yes 00:07:04.075 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.075 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.075 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.075 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:04.075 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.075 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.075 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.075 17:16:23 -- accel/accel.sh@21 -- # val= 00:07:04.075 17:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.075 17:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.075 17:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:05.454 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:05.454 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.454 17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:05.454 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:05.454 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:05.454 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.454 17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:05.454 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:05.454 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:05.454 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.454 17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:05.454 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:05.454 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:05.454 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.454 17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:05.454 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:05.454 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:05.454 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.454 17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:05.454 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:05.454 17:16:24 -- accel/accel.sh@21 -- # val= 00:07:05.454 17:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.454 17:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:05.454 17:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:05.454 17:16:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.454 17:16:24 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:05.454 17:16:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.454 00:07:05.454 real 0m2.676s 00:07:05.454 user 0m2.461s 00:07:05.454 sys 0m0.223s 00:07:05.454 17:16:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.454 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:07:05.454 ************************************ 00:07:05.454 END TEST accel_xor 00:07:05.454 ************************************ 00:07:05.454 17:16:24 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:05.454 17:16:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:05.454 17:16:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.454 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:07:05.454 ************************************ 00:07:05.454 START TEST accel_xor 
00:07:05.454 ************************************ 00:07:05.454 17:16:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:05.454 17:16:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.454 17:16:24 -- accel/accel.sh@17 -- # local accel_module 00:07:05.454 17:16:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:05.454 17:16:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:05.454 17:16:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.454 17:16:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.454 17:16:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.454 17:16:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.454 17:16:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.454 17:16:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.454 17:16:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.454 17:16:24 -- accel/accel.sh@42 -- # jq -r . 00:07:05.454 [2024-11-09 17:16:24.947469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:05.454 [2024-11-09 17:16:24.947533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540819 ] 00:07:05.454 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.454 [2024-11-09 17:16:25.016356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.454 [2024-11-09 17:16:25.081675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.834 17:16:26 -- accel/accel.sh@18 -- # out=' 00:07:06.834 SPDK Configuration: 00:07:06.834 Core mask: 0x1 00:07:06.834 00:07:06.834 Accel Perf Configuration: 00:07:06.834 Workload Type: xor 00:07:06.834 Source buffers: 3 00:07:06.834 Transfer size: 4096 bytes 00:07:06.834 Vector count 1 00:07:06.834 Module: software 00:07:06.834 Queue depth: 32 00:07:06.834 Allocate depth: 32 00:07:06.834 # threads/core: 1 00:07:06.834 Run time: 1 seconds 00:07:06.834 Verify: Yes 00:07:06.834 00:07:06.834 Running for 1 seconds... 00:07:06.834 00:07:06.834 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.834 ------------------------------------------------------------------------------------ 00:07:06.834 0,0 468064/s 1828 MiB/s 0 0 00:07:06.834 ==================================================================================== 00:07:06.834 Total 468064/s 1828 MiB/s 0 0' 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.834 17:16:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:06.834 17:16:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:06.834 17:16:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.834 17:16:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.834 17:16:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.834 17:16:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.834 17:16:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.834 17:16:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.834 17:16:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.834 17:16:26 -- accel/accel.sh@42 -- # jq -r . 00:07:06.834 [2024-11-09 17:16:26.302007] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
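Each of these cases boils down to a single accel_perf invocation; the flags are visible in the trace (-t run time in seconds, -w workload type, -y verify the results, -x number of xor source buffers, -c an accel JSON config fed over a file descriptor). Run by hand outside the harness it would look roughly like the following, assuming an in-tree build; dropping -c falls back to the defaults, which here still means the software module:

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w xor -y -x 3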
00:07:06.834 [2024-11-09 17:16:26.302071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541045 ] 00:07:06.834 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.834 [2024-11-09 17:16:26.370850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.834 [2024-11-09 17:16:26.435841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.834 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:06.834 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.834 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:06.834 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.834 17:16:26 -- accel/accel.sh@21 -- # val=0x1 00:07:06.834 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.834 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:06.834 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.834 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:06.834 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.834 17:16:26 -- accel/accel.sh@21 -- # val=xor 00:07:06.834 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.834 17:16:26 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.834 17:16:26 -- accel/accel.sh@21 -- # val=3 00:07:06.834 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.834 17:16:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.834 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.834 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.835 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:06.835 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.835 17:16:26 -- accel/accel.sh@21 -- # val=software 00:07:06.835 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.835 17:16:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.835 17:16:26 -- accel/accel.sh@21 -- # val=32 00:07:06.835 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.835 17:16:26 -- accel/accel.sh@21 -- # val=32 00:07:06.835 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.835 17:16:26 -- 
accel/accel.sh@21 -- # val=1 00:07:06.835 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.835 17:16:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.835 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.835 17:16:26 -- accel/accel.sh@21 -- # val=Yes 00:07:06.835 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.835 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:06.835 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.835 17:16:26 -- accel/accel.sh@21 -- # val= 00:07:06.835 17:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.835 17:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:08.363 17:16:27 -- accel/accel.sh@21 -- # val= 00:07:08.363 17:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.363 17:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:08.363 17:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:08.363 17:16:27 -- accel/accel.sh@21 -- # val= 00:07:08.363 17:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.363 17:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:08.363 17:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:08.363 17:16:27 -- accel/accel.sh@21 -- # val= 00:07:08.363 17:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.363 17:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:08.363 17:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:08.363 17:16:27 -- accel/accel.sh@21 -- # val= 00:07:08.363 17:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.363 17:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:08.363 17:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:08.363 17:16:27 -- accel/accel.sh@21 -- # val= 00:07:08.363 17:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.363 17:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:08.363 17:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:08.363 17:16:27 -- accel/accel.sh@21 -- # val= 00:07:08.363 17:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.363 17:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:08.363 17:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:08.363 17:16:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.363 17:16:27 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:08.363 17:16:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.363 00:07:08.363 real 0m2.717s 00:07:08.363 user 0m2.472s 00:07:08.363 sys 0m0.254s 00:07:08.364 17:16:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.364 17:16:27 -- common/autotest_common.sh@10 -- # set +x 00:07:08.364 ************************************ 00:07:08.364 END TEST accel_xor 00:07:08.364 ************************************ 00:07:08.364 17:16:27 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:08.364 17:16:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:08.364 17:16:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.364 17:16:27 -- common/autotest_common.sh@10 -- # set +x 00:07:08.364 ************************************ 00:07:08.364 START TEST 
accel_dif_verify 00:07:08.364 ************************************ 00:07:08.364 17:16:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:08.364 17:16:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.364 17:16:27 -- accel/accel.sh@17 -- # local accel_module 00:07:08.364 17:16:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:08.364 17:16:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:08.364 17:16:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.364 17:16:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.364 17:16:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.364 17:16:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.364 17:16:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.364 17:16:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.364 17:16:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.364 17:16:27 -- accel/accel.sh@42 -- # jq -r . 00:07:08.364 [2024-11-09 17:16:27.706170] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.364 [2024-11-09 17:16:27.706231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541270 ] 00:07:08.364 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.364 [2024-11-09 17:16:27.776402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.364 [2024-11-09 17:16:27.844344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.302 17:16:29 -- accel/accel.sh@18 -- # out=' 00:07:09.302 SPDK Configuration: 00:07:09.302 Core mask: 0x1 00:07:09.302 00:07:09.302 Accel Perf Configuration: 00:07:09.302 Workload Type: dif_verify 00:07:09.302 Vector size: 4096 bytes 00:07:09.302 Transfer size: 4096 bytes 00:07:09.302 Block size: 512 bytes 00:07:09.302 Metadata size: 8 bytes 00:07:09.302 Vector count 1 00:07:09.302 Module: software 00:07:09.302 Queue depth: 32 00:07:09.302 Allocate depth: 32 00:07:09.302 # threads/core: 1 00:07:09.302 Run time: 1 seconds 00:07:09.302 Verify: No 00:07:09.302 00:07:09.302 Running for 1 seconds... 00:07:09.302 00:07:09.302 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.302 ------------------------------------------------------------------------------------ 00:07:09.302 0,0 137120/s 543 MiB/s 0 0 00:07:09.302 ==================================================================================== 00:07:09.302 Total 137120/s 535 MiB/s 0 0' 00:07:09.302 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.302 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.302 17:16:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:09.302 17:16:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:09.303 17:16:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.303 17:16:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.303 17:16:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.303 17:16:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.303 17:16:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.303 17:16:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.303 17:16:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.303 17:16:29 -- accel/accel.sh@42 -- # jq -r . 
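One detail in the dif_verify table above: the per-core row reports 543 MiB/s and the Total row 535 MiB/s for the same 137120 transfers/s. The gap appears to be metadata accounting. With a 4096-byte transfer split into eight 512-byte blocks carrying 8 bytes of DIF metadata each, a transfer moves 4160 bytes including metadata versus 4096 bytes of payload, and the same pattern repeats in the dif_generate and dif_generate_copy tables further on. A quick check (a sketch, not part of the log):

    echo $(( 137120 * (4096 + 8 * 8) / 1024 / 1024 ))   # 543 MiB/s, payload plus metadata
    echo $(( 137120 * 4096 / 1024 / 1024 ))             # 535 MiB/s, payload only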
00:07:09.303 [2024-11-09 17:16:29.063433] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.303 [2024-11-09 17:16:29.063506] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541472 ] 00:07:09.562 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.562 [2024-11-09 17:16:29.130517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.562 [2024-11-09 17:16:29.197168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.562 17:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.562 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val=0x1 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val=dif_verify 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val=software 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val=32 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val=32 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val=1 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val=No 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.563 17:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.563 17:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.563 17:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 17:16:30 -- accel/accel.sh@21 -- # val= 00:07:10.942 17:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 17:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 17:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 17:16:30 -- accel/accel.sh@21 -- # val= 00:07:10.942 17:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 17:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 17:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 17:16:30 -- accel/accel.sh@21 -- # val= 00:07:10.942 17:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 17:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 17:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 17:16:30 -- accel/accel.sh@21 -- # val= 00:07:10.942 17:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 17:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 17:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 17:16:30 -- accel/accel.sh@21 -- # val= 00:07:10.942 17:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 17:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 17:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 17:16:30 -- accel/accel.sh@21 -- # val= 00:07:10.942 17:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 17:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 17:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 17:16:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.942 17:16:30 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:10.942 17:16:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.942 00:07:10.942 real 0m2.720s 00:07:10.942 user 0m2.476s 00:07:10.942 sys 0m0.255s 00:07:10.942 17:16:30 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.942 17:16:30 -- common/autotest_common.sh@10 -- # set +x 00:07:10.942 ************************************ 00:07:10.942 END TEST accel_dif_verify 00:07:10.942 ************************************ 00:07:10.942 17:16:30 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:10.942 17:16:30 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:10.942 17:16:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.942 17:16:30 -- common/autotest_common.sh@10 -- # set +x 00:07:10.943 ************************************ 00:07:10.943 START TEST accel_dif_generate 00:07:10.943 ************************************ 00:07:10.943 17:16:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:10.943 17:16:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.943 17:16:30 -- accel/accel.sh@17 -- # local accel_module 00:07:10.943 17:16:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:10.943 17:16:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:10.943 17:16:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.943 17:16:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.943 17:16:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.943 17:16:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.943 17:16:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.943 17:16:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.943 17:16:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.943 17:16:30 -- accel/accel.sh@42 -- # jq -r . 00:07:10.943 [2024-11-09 17:16:30.468694] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.943 [2024-11-09 17:16:30.468761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541695 ] 00:07:10.943 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.943 [2024-11-09 17:16:30.538235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.943 [2024-11-09 17:16:30.606006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.321 17:16:31 -- accel/accel.sh@18 -- # out=' 00:07:12.321 SPDK Configuration: 00:07:12.321 Core mask: 0x1 00:07:12.322 00:07:12.322 Accel Perf Configuration: 00:07:12.322 Workload Type: dif_generate 00:07:12.322 Vector size: 4096 bytes 00:07:12.322 Transfer size: 4096 bytes 00:07:12.322 Block size: 512 bytes 00:07:12.322 Metadata size: 8 bytes 00:07:12.322 Vector count 1 00:07:12.322 Module: software 00:07:12.322 Queue depth: 32 00:07:12.322 Allocate depth: 32 00:07:12.322 # threads/core: 1 00:07:12.322 Run time: 1 seconds 00:07:12.322 Verify: No 00:07:12.322 00:07:12.322 Running for 1 seconds... 
00:07:12.322 00:07:12.322 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.322 ------------------------------------------------------------------------------------ 00:07:12.322 0,0 164992/s 654 MiB/s 0 0 00:07:12.322 ==================================================================================== 00:07:12.322 Total 164992/s 644 MiB/s 0 0' 00:07:12.322 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:12.322 17:16:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:12.322 17:16:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.322 17:16:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.322 17:16:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.322 17:16:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.322 17:16:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.322 17:16:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.322 17:16:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.322 17:16:31 -- accel/accel.sh@42 -- # jq -r . 00:07:12.322 [2024-11-09 17:16:31.823169] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.322 [2024-11-09 17:16:31.823233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541955 ] 00:07:12.322 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.322 [2024-11-09 17:16:31.890159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.322 [2024-11-09 17:16:31.953590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.322 17:16:31 -- accel/accel.sh@21 -- # val= 00:07:12.322 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:31 -- accel/accel.sh@21 -- # val= 00:07:12.322 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:31 -- accel/accel.sh@21 -- # val=0x1 00:07:12.322 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:31 -- accel/accel.sh@21 -- # val= 00:07:12.322 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:31 -- accel/accel.sh@21 -- # val= 00:07:12.322 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:31 -- accel/accel.sh@21 -- # val=dif_generate 00:07:12.322 17:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:31 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:12.322 17:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.322 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # IFS=: 
00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.322 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:32 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:12.322 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:32 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:12.322 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:32 -- accel/accel.sh@21 -- # val= 00:07:12.322 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:32 -- accel/accel.sh@21 -- # val=software 00:07:12.322 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:32 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:32 -- accel/accel.sh@21 -- # val=32 00:07:12.322 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:32 -- accel/accel.sh@21 -- # val=32 00:07:12.322 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:32 -- accel/accel.sh@21 -- # val=1 00:07:12.322 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.322 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:32 -- accel/accel.sh@21 -- # val=No 00:07:12.322 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:32 -- accel/accel.sh@21 -- # val= 00:07:12.322 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 17:16:32 -- accel/accel.sh@21 -- # val= 00:07:12.322 17:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 17:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.701 17:16:33 -- accel/accel.sh@21 -- # val= 00:07:13.701 17:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.701 17:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.701 17:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.701 17:16:33 -- accel/accel.sh@21 -- # val= 00:07:13.701 17:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.701 17:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.701 17:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.701 17:16:33 -- accel/accel.sh@21 -- # val= 00:07:13.701 17:16:33 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:13.701 17:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.701 17:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.701 17:16:33 -- accel/accel.sh@21 -- # val= 00:07:13.701 17:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.701 17:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.701 17:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.701 17:16:33 -- accel/accel.sh@21 -- # val= 00:07:13.701 17:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.701 17:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.701 17:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.701 17:16:33 -- accel/accel.sh@21 -- # val= 00:07:13.701 17:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.701 17:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.701 17:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.701 17:16:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.701 17:16:33 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:13.701 17:16:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.701 00:07:13.701 real 0m2.708s 00:07:13.701 user 0m2.467s 00:07:13.701 sys 0m0.253s 00:07:13.701 17:16:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.701 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:07:13.701 ************************************ 00:07:13.701 END TEST accel_dif_generate 00:07:13.701 ************************************ 00:07:13.701 17:16:33 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:13.701 17:16:33 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:13.701 17:16:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.701 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:07:13.701 ************************************ 00:07:13.701 START TEST accel_dif_generate_copy 00:07:13.701 ************************************ 00:07:13.701 17:16:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:13.701 17:16:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.701 17:16:33 -- accel/accel.sh@17 -- # local accel_module 00:07:13.701 17:16:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:13.701 17:16:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:13.701 17:16:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.701 17:16:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.701 17:16:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.701 17:16:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.701 17:16:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.701 17:16:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.701 17:16:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.701 17:16:33 -- accel/accel.sh@42 -- # jq -r . 00:07:13.701 [2024-11-09 17:16:33.223991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:13.702 [2024-11-09 17:16:33.224073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542245 ] 00:07:13.702 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.702 [2024-11-09 17:16:33.293145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.702 [2024-11-09 17:16:33.355586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.080 17:16:34 -- accel/accel.sh@18 -- # out=' 00:07:15.080 SPDK Configuration: 00:07:15.080 Core mask: 0x1 00:07:15.080 00:07:15.080 Accel Perf Configuration: 00:07:15.080 Workload Type: dif_generate_copy 00:07:15.080 Vector size: 4096 bytes 00:07:15.080 Transfer size: 4096 bytes 00:07:15.080 Vector count 1 00:07:15.080 Module: software 00:07:15.080 Queue depth: 32 00:07:15.080 Allocate depth: 32 00:07:15.080 # threads/core: 1 00:07:15.080 Run time: 1 seconds 00:07:15.080 Verify: No 00:07:15.080 00:07:15.080 Running for 1 seconds... 00:07:15.080 00:07:15.080 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.080 ------------------------------------------------------------------------------------ 00:07:15.080 0,0 128448/s 509 MiB/s 0 0 00:07:15.080 ==================================================================================== 00:07:15.080 Total 128448/s 501 MiB/s 0 0' 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.080 17:16:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:15.080 17:16:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:15.080 17:16:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.080 17:16:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.080 17:16:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.080 17:16:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.080 17:16:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.080 17:16:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.080 17:16:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.080 17:16:34 -- accel/accel.sh@42 -- # jq -r . 00:07:15.080 [2024-11-09 17:16:34.573535] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:15.080 [2024-11-09 17:16:34.573599] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542513 ] 00:07:15.080 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.080 [2024-11-09 17:16:34.641172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.080 [2024-11-09 17:16:34.705365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.080 17:16:34 -- accel/accel.sh@21 -- # val= 00:07:15.080 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.080 17:16:34 -- accel/accel.sh@21 -- # val= 00:07:15.080 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.080 17:16:34 -- accel/accel.sh@21 -- # val=0x1 00:07:15.080 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.080 17:16:34 -- accel/accel.sh@21 -- # val= 00:07:15.080 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.080 17:16:34 -- accel/accel.sh@21 -- # val= 00:07:15.080 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.080 17:16:34 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:15.080 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.080 17:16:34 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.080 17:16:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.080 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.080 17:16:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.080 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.080 17:16:34 -- accel/accel.sh@21 -- # val= 00:07:15.080 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.080 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.080 17:16:34 -- accel/accel.sh@21 -- # val=software 00:07:15.080 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.080 17:16:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.081 17:16:34 -- accel/accel.sh@21 -- # val=32 00:07:15.081 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.081 17:16:34 -- accel/accel.sh@21 -- # val=32 00:07:15.081 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # read -r 
var val 00:07:15.081 17:16:34 -- accel/accel.sh@21 -- # val=1 00:07:15.081 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.081 17:16:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.081 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.081 17:16:34 -- accel/accel.sh@21 -- # val=No 00:07:15.081 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.081 17:16:34 -- accel/accel.sh@21 -- # val= 00:07:15.081 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.081 17:16:34 -- accel/accel.sh@21 -- # val= 00:07:15.081 17:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.081 17:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:16.459 17:16:35 -- accel/accel.sh@21 -- # val= 00:07:16.459 17:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.459 17:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:16.459 17:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:16.459 17:16:35 -- accel/accel.sh@21 -- # val= 00:07:16.459 17:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.459 17:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:16.459 17:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:16.459 17:16:35 -- accel/accel.sh@21 -- # val= 00:07:16.459 17:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.459 17:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:16.459 17:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:16.459 17:16:35 -- accel/accel.sh@21 -- # val= 00:07:16.459 17:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.459 17:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:16.459 17:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:16.459 17:16:35 -- accel/accel.sh@21 -- # val= 00:07:16.459 17:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.459 17:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:16.459 17:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:16.459 17:16:35 -- accel/accel.sh@21 -- # val= 00:07:16.459 17:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.459 17:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:16.459 17:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:16.459 17:16:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.459 17:16:35 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:16.459 17:16:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.459 00:07:16.459 real 0m2.706s 00:07:16.459 user 0m2.463s 00:07:16.459 sys 0m0.253s 00:07:16.459 17:16:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.459 17:16:35 -- common/autotest_common.sh@10 -- # set +x 00:07:16.459 ************************************ 00:07:16.459 END TEST accel_dif_generate_copy 00:07:16.459 ************************************ 00:07:16.459 17:16:35 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:16.459 17:16:35 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:16.459 17:16:35 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:16.459 17:16:35 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.459 17:16:35 -- common/autotest_common.sh@10 -- # set +x 00:07:16.459 ************************************ 00:07:16.459 START TEST accel_comp 00:07:16.459 ************************************ 00:07:16.459 17:16:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:16.459 17:16:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.459 17:16:35 -- accel/accel.sh@17 -- # local accel_module 00:07:16.459 17:16:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:16.459 17:16:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:16.459 17:16:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.459 17:16:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.459 17:16:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.459 17:16:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.459 17:16:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.459 17:16:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.459 17:16:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.459 17:16:35 -- accel/accel.sh@42 -- # jq -r . 00:07:16.459 [2024-11-09 17:16:35.979293] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:16.460 [2024-11-09 17:16:35.979365] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542799 ] 00:07:16.460 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.460 [2024-11-09 17:16:36.048155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.460 [2024-11-09 17:16:36.111085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.839 17:16:37 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:17.839 00:07:17.839 SPDK Configuration: 00:07:17.839 Core mask: 0x1 00:07:17.839 00:07:17.839 Accel Perf Configuration: 00:07:17.839 Workload Type: compress 00:07:17.839 Transfer size: 4096 bytes 00:07:17.839 Vector count 1 00:07:17.839 Module: software 00:07:17.839 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.839 Queue depth: 32 00:07:17.839 Allocate depth: 32 00:07:17.839 # threads/core: 1 00:07:17.839 Run time: 1 seconds 00:07:17.839 Verify: No 00:07:17.839 00:07:17.839 Running for 1 seconds... 
00:07:17.839 00:07:17.839 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.839 ------------------------------------------------------------------------------------ 00:07:17.839 0,0 61536/s 256 MiB/s 0 0 00:07:17.839 ==================================================================================== 00:07:17.839 Total 61536/s 240 MiB/s 0 0' 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.839 17:16:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.839 17:16:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.839 17:16:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.839 17:16:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.839 17:16:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.839 17:16:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.839 17:16:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.839 17:16:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.839 17:16:37 -- accel/accel.sh@42 -- # jq -r . 00:07:17.839 [2024-11-09 17:16:37.334085] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:17.839 [2024-11-09 17:16:37.334150] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543071 ] 00:07:17.839 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.839 [2024-11-09 17:16:37.402135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.839 [2024-11-09 17:16:37.465695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val=0x1 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val=compress 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- 
accel/accel.sh@24 -- # accel_opc=compress 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val=software 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val=32 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val=32 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val=1 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val=No 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.839 17:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.839 17:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.839 17:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:19.220 17:16:38 -- accel/accel.sh@21 -- # val= 00:07:19.220 17:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.220 17:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:19.220 17:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:19.220 17:16:38 -- accel/accel.sh@21 -- # val= 00:07:19.220 17:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.220 17:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:19.220 17:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:19.220 17:16:38 -- accel/accel.sh@21 -- # val= 00:07:19.220 17:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.220 17:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:19.220 
17:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:19.220 17:16:38 -- accel/accel.sh@21 -- # val= 00:07:19.220 17:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.220 17:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:19.220 17:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:19.220 17:16:38 -- accel/accel.sh@21 -- # val= 00:07:19.220 17:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.220 17:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:19.220 17:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:19.220 17:16:38 -- accel/accel.sh@21 -- # val= 00:07:19.220 17:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.220 17:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:19.220 17:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:19.220 17:16:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.220 17:16:38 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:19.220 17:16:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.220 00:07:19.220 real 0m2.717s 00:07:19.220 user 0m2.472s 00:07:19.220 sys 0m0.255s 00:07:19.220 17:16:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.220 17:16:38 -- common/autotest_common.sh@10 -- # set +x 00:07:19.220 ************************************ 00:07:19.220 END TEST accel_comp 00:07:19.220 ************************************ 00:07:19.220 17:16:38 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:19.220 17:16:38 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:19.220 17:16:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.220 17:16:38 -- common/autotest_common.sh@10 -- # set +x 00:07:19.220 ************************************ 00:07:19.220 START TEST accel_decomp 00:07:19.220 ************************************ 00:07:19.220 17:16:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:19.220 17:16:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.220 17:16:38 -- accel/accel.sh@17 -- # local accel_module 00:07:19.220 17:16:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:19.220 17:16:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:19.220 17:16:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.220 17:16:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.220 17:16:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.220 17:16:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.220 17:16:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.220 17:16:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.220 17:16:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.220 17:16:38 -- accel/accel.sh@42 -- # jq -r . 00:07:19.220 [2024-11-09 17:16:38.739461] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:19.220 [2024-11-09 17:16:38.739523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543366 ] 00:07:19.220 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.220 [2024-11-09 17:16:38.808222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.220 [2024-11-09 17:16:38.873542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.599 17:16:40 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:20.599 00:07:20.599 SPDK Configuration: 00:07:20.599 Core mask: 0x1 00:07:20.599 00:07:20.599 Accel Perf Configuration: 00:07:20.599 Workload Type: decompress 00:07:20.599 Transfer size: 4096 bytes 00:07:20.599 Vector count 1 00:07:20.599 Module: software 00:07:20.599 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:20.599 Queue depth: 32 00:07:20.599 Allocate depth: 32 00:07:20.599 # threads/core: 1 00:07:20.599 Run time: 1 seconds 00:07:20.599 Verify: Yes 00:07:20.599 00:07:20.599 Running for 1 seconds... 00:07:20.599 00:07:20.599 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.599 ------------------------------------------------------------------------------------ 00:07:20.599 0,0 87392/s 161 MiB/s 0 0 00:07:20.599 ==================================================================================== 00:07:20.599 Total 87392/s 341 MiB/s 0 0' 00:07:20.599 17:16:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:20.599 17:16:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.599 17:16:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.599 17:16:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.599 17:16:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.599 17:16:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.599 17:16:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.599 17:16:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.599 17:16:40 -- accel/accel.sh@42 -- # jq -r . 00:07:20.599 [2024-11-09 17:16:40.084342] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
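The full accel_perf command line is visible in the trace above; a minimal sketch for rerunning the same decompress case by hand, assuming a local SPDK checkout built at $SPDK_DIR and omitting the JSON accel config that the harness pipes in via -c /dev/fd/62:
  SPDK_DIR=${SPDK_DIR:-$HOME/spdk}              # assumption: path to a built SPDK tree
  "$SPDK_DIR"/build/examples/accel_perf \
      -t 1 -w decompress \
      -l "$SPDK_DIR"/test/accel/bib -y          # same workload, input file and verify flag as in the run above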
00:07:20.599 [2024-11-09 17:16:40.084393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543576 ] 00:07:20.599 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.599 [2024-11-09 17:16:40.148754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.599 [2024-11-09 17:16:40.218267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val= 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val= 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val= 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val=0x1 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val= 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val= 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val=decompress 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val= 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val=software 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val=32 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- 
accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val=32 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val=1 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val=Yes 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val= 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:20.599 17:16:40 -- accel/accel.sh@21 -- # val= 00:07:20.599 17:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:20.599 17:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.979 17:16:41 -- accel/accel.sh@21 -- # val= 00:07:21.980 17:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.980 17:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.980 17:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.980 17:16:41 -- accel/accel.sh@21 -- # val= 00:07:21.980 17:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.980 17:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.980 17:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.980 17:16:41 -- accel/accel.sh@21 -- # val= 00:07:21.980 17:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.980 17:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.980 17:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.980 17:16:41 -- accel/accel.sh@21 -- # val= 00:07:21.980 17:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.980 17:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.980 17:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.980 17:16:41 -- accel/accel.sh@21 -- # val= 00:07:21.980 17:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.980 17:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.980 17:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.980 17:16:41 -- accel/accel.sh@21 -- # val= 00:07:21.980 17:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.980 17:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.980 17:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.980 17:16:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.980 17:16:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:21.980 17:16:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.980 00:07:21.980 real 0m2.713s 00:07:21.980 user 0m2.480s 00:07:21.980 sys 0m0.244s 00:07:21.980 17:16:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.980 17:16:41 -- common/autotest_common.sh@10 -- # set +x 00:07:21.980 ************************************ 00:07:21.980 END TEST accel_decomp 00:07:21.980 ************************************ 00:07:21.980 17:16:41 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.980 17:16:41 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:21.980 17:16:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.980 17:16:41 -- common/autotest_common.sh@10 -- # set +x 00:07:21.980 ************************************ 00:07:21.980 START TEST accel_decmop_full 00:07:21.980 ************************************ 00:07:21.980 17:16:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.980 17:16:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.980 17:16:41 -- accel/accel.sh@17 -- # local accel_module 00:07:21.980 17:16:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.980 17:16:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.980 17:16:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.980 17:16:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.980 17:16:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.980 17:16:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.980 17:16:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.980 17:16:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.980 17:16:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.980 17:16:41 -- accel/accel.sh@42 -- # jq -r . 00:07:21.980 [2024-11-09 17:16:41.495952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:21.980 [2024-11-09 17:16:41.496039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543805 ] 00:07:21.980 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.980 [2024-11-09 17:16:41.566062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.980 [2024-11-09 17:16:41.633866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.358 17:16:42 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:23.358 00:07:23.358 SPDK Configuration: 00:07:23.358 Core mask: 0x1 00:07:23.358 00:07:23.358 Accel Perf Configuration: 00:07:23.358 Workload Type: decompress 00:07:23.358 Transfer size: 111250 bytes 00:07:23.358 Vector count 1 00:07:23.358 Module: software 00:07:23.358 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.358 Queue depth: 32 00:07:23.358 Allocate depth: 32 00:07:23.358 # threads/core: 1 00:07:23.358 Run time: 1 seconds 00:07:23.358 Verify: Yes 00:07:23.358 00:07:23.358 Running for 1 seconds... 
00:07:23.358 00:07:23.358 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.358 ------------------------------------------------------------------------------------ 00:07:23.358 0,0 5824/s 240 MiB/s 0 0 00:07:23.358 ==================================================================================== 00:07:23.358 Total 5824/s 617 MiB/s 0 0' 00:07:23.358 17:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.358 17:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.358 17:16:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:23.358 17:16:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:23.358 17:16:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.358 17:16:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.358 17:16:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.358 17:16:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.358 17:16:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.358 17:16:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.358 17:16:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.358 17:16:42 -- accel/accel.sh@42 -- # jq -r . 00:07:23.358 [2024-11-09 17:16:42.855227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:23.358 [2024-11-09 17:16:42.855281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543982 ] 00:07:23.358 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.358 [2024-11-09 17:16:42.917646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.358 [2024-11-09 17:16:42.982759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val= 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val= 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val= 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val=0x1 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val= 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val= 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val=decompress 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 
00:07:23.359 17:16:43 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val= 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val=software 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val=32 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val=32 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val=1 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val=Yes 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val= 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:23.359 17:16:43 -- accel/accel.sh@21 -- # val= 00:07:23.359 17:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:23.359 17:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.738 17:16:44 -- accel/accel.sh@21 -- # val= 00:07:24.738 17:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.738 17:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.738 17:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.738 17:16:44 -- accel/accel.sh@21 -- # val= 00:07:24.738 17:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.738 17:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.738 17:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.738 17:16:44 -- accel/accel.sh@21 -- # val= 00:07:24.738 17:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.738 17:16:44 -- 
accel/accel.sh@20 -- # IFS=: 00:07:24.738 17:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.738 17:16:44 -- accel/accel.sh@21 -- # val= 00:07:24.738 17:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.738 17:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.738 17:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.738 17:16:44 -- accel/accel.sh@21 -- # val= 00:07:24.738 17:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.738 17:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.738 17:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.738 17:16:44 -- accel/accel.sh@21 -- # val= 00:07:24.738 17:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.738 17:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.738 17:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.738 17:16:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.738 17:16:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:24.738 17:16:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.738 00:07:24.738 real 0m2.721s 00:07:24.738 user 0m2.486s 00:07:24.738 sys 0m0.243s 00:07:24.738 17:16:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:24.738 17:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:24.738 ************************************ 00:07:24.738 END TEST accel_decmop_full 00:07:24.738 ************************************ 00:07:24.738 17:16:44 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:24.738 17:16:44 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:24.738 17:16:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.738 17:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:24.738 ************************************ 00:07:24.738 START TEST accel_decomp_mcore 00:07:24.738 ************************************ 00:07:24.738 17:16:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:24.738 17:16:44 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.738 17:16:44 -- accel/accel.sh@17 -- # local accel_module 00:07:24.738 17:16:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:24.738 17:16:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:24.738 17:16:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.738 17:16:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.738 17:16:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.738 17:16:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.738 17:16:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.738 17:16:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.738 17:16:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.738 17:16:44 -- accel/accel.sh@42 -- # jq -r . 00:07:24.738 [2024-11-09 17:16:44.253444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:24.738 [2024-11-09 17:16:44.253524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544228 ] 00:07:24.738 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.738 [2024-11-09 17:16:44.323666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.738 [2024-11-09 17:16:44.392589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.738 [2024-11-09 17:16:44.392684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.738 [2024-11-09 17:16:44.392756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.738 [2024-11-09 17:16:44.392758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.117 17:16:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:26.117 00:07:26.117 SPDK Configuration: 00:07:26.117 Core mask: 0xf 00:07:26.117 00:07:26.117 Accel Perf Configuration: 00:07:26.117 Workload Type: decompress 00:07:26.117 Transfer size: 4096 bytes 00:07:26.117 Vector count 1 00:07:26.117 Module: software 00:07:26.117 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:26.117 Queue depth: 32 00:07:26.117 Allocate depth: 32 00:07:26.117 # threads/core: 1 00:07:26.117 Run time: 1 seconds 00:07:26.117 Verify: Yes 00:07:26.117 00:07:26.117 Running for 1 seconds... 00:07:26.117 00:07:26.117 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.117 ------------------------------------------------------------------------------------ 00:07:26.117 0,0 69792/s 128 MiB/s 0 0 00:07:26.117 3,0 73632/s 135 MiB/s 0 0 00:07:26.117 2,0 73568/s 135 MiB/s 0 0 00:07:26.117 1,0 73728/s 135 MiB/s 0 0 00:07:26.117 ==================================================================================== 00:07:26.117 Total 290720/s 1135 MiB/s 0 0' 00:07:26.117 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.117 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.117 17:16:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.117 17:16:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.117 17:16:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.117 17:16:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.117 17:16:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.117 17:16:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.117 17:16:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.117 17:16:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.117 17:16:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.117 17:16:45 -- accel/accel.sh@42 -- # jq -r . 00:07:26.118 [2024-11-09 17:16:45.623092] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
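With -m 0xf the app reports four available cores and starts a reactor on each, so the results table above gains one row per core; the Total line is simply the per-core sum, which can be checked directly from the rows shown:
  awk 'BEGIN { print 69792 + 73632 + 73568 + 73728 "  transfers/s" }'   # 290720, matching the Total row above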
00:07:26.118 [2024-11-09 17:16:45.623155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544505 ] 00:07:26.118 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.118 [2024-11-09 17:16:45.691391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.118 [2024-11-09 17:16:45.757862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.118 [2024-11-09 17:16:45.757956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.118 [2024-11-09 17:16:45.758016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.118 [2024-11-09 17:16:45.758017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val= 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val= 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val= 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val=0xf 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val= 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val= 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val=decompress 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val= 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val=software 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val=32 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val=32 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val=1 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val=Yes 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val= 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.118 17:16:45 -- accel/accel.sh@21 -- # val= 00:07:26.118 17:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:26.118 17:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:27.497 17:16:46 -- accel/accel.sh@21 -- # val= 00:07:27.497 17:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.497 17:16:46 -- accel/accel.sh@21 -- # val= 00:07:27.497 17:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.497 17:16:46 -- accel/accel.sh@21 -- # val= 00:07:27.497 17:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.497 17:16:46 -- accel/accel.sh@21 -- # val= 00:07:27.497 17:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.497 17:16:46 -- accel/accel.sh@21 -- # val= 00:07:27.497 17:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.497 17:16:46 -- accel/accel.sh@21 -- # val= 00:07:27.497 17:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.497 17:16:46 -- accel/accel.sh@21 -- # val= 00:07:27.497 17:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.497 17:16:46 -- accel/accel.sh@21 -- # val= 00:07:27.497 17:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.497 17:16:46 
-- accel/accel.sh@20 -- # IFS=: 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.497 17:16:46 -- accel/accel.sh@21 -- # val= 00:07:27.497 17:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.497 17:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.497 17:16:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.497 17:16:46 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:27.497 17:16:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.497 00:07:27.497 real 0m2.742s 00:07:27.497 user 0m9.152s 00:07:27.497 sys 0m0.261s 00:07:27.497 17:16:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.497 17:16:46 -- common/autotest_common.sh@10 -- # set +x 00:07:27.497 ************************************ 00:07:27.497 END TEST accel_decomp_mcore 00:07:27.497 ************************************ 00:07:27.497 17:16:46 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:27.497 17:16:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:27.497 17:16:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.497 17:16:46 -- common/autotest_common.sh@10 -- # set +x 00:07:27.497 ************************************ 00:07:27.497 START TEST accel_decomp_full_mcore 00:07:27.497 ************************************ 00:07:27.497 17:16:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:27.497 17:16:47 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.497 17:16:47 -- accel/accel.sh@17 -- # local accel_module 00:07:27.497 17:16:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:27.497 17:16:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:27.497 17:16:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.497 17:16:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.497 17:16:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.497 17:16:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.497 17:16:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.497 17:16:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.497 17:16:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.497 17:16:47 -- accel/accel.sh@42 -- # jq -r . 00:07:27.497 [2024-11-09 17:16:47.031976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:27.497 [2024-11-09 17:16:47.032040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544789 ] 00:07:27.497 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.497 [2024-11-09 17:16:47.100140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.497 [2024-11-09 17:16:47.167774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.497 [2024-11-09 17:16:47.167869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.497 [2024-11-09 17:16:47.167952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.497 [2024-11-09 17:16:47.167954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.875 17:16:48 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:28.875 00:07:28.875 SPDK Configuration: 00:07:28.875 Core mask: 0xf 00:07:28.875 00:07:28.875 Accel Perf Configuration: 00:07:28.875 Workload Type: decompress 00:07:28.875 Transfer size: 111250 bytes 00:07:28.875 Vector count 1 00:07:28.875 Module: software 00:07:28.875 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:28.875 Queue depth: 32 00:07:28.875 Allocate depth: 32 00:07:28.875 # threads/core: 1 00:07:28.875 Run time: 1 seconds 00:07:28.875 Verify: Yes 00:07:28.875 00:07:28.875 Running for 1 seconds... 00:07:28.875 00:07:28.875 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.875 ------------------------------------------------------------------------------------ 00:07:28.875 0,0 5376/s 222 MiB/s 0 0 00:07:28.875 3,0 5696/s 235 MiB/s 0 0 00:07:28.875 2,0 5696/s 235 MiB/s 0 0 00:07:28.875 1,0 5696/s 235 MiB/s 0 0 00:07:28.875 ==================================================================================== 00:07:28.875 Total 22464/s 2383 MiB/s 0 0' 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.875 17:16:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.875 17:16:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.875 17:16:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.875 17:16:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.875 17:16:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.875 17:16:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.875 17:16:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.875 17:16:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.875 17:16:48 -- accel/accel.sh@42 -- # jq -r . 00:07:28.875 [2024-11-09 17:16:48.405209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:28.875 [2024-11-09 17:16:48.405272] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545067 ] 00:07:28.875 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.875 [2024-11-09 17:16:48.473432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.875 [2024-11-09 17:16:48.539709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.875 [2024-11-09 17:16:48.539802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.875 [2024-11-09 17:16:48.539891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.875 [2024-11-09 17:16:48.539893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val= 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val= 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val= 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val=0xf 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val= 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val= 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val=decompress 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val= 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val=software 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val=32 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val=32 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val=1 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val=Yes 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val= 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.875 17:16:48 -- accel/accel.sh@21 -- # val= 00:07:28.875 17:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.875 17:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.253 17:16:49 -- accel/accel.sh@21 -- # val= 00:07:30.253 17:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.253 17:16:49 -- accel/accel.sh@21 -- # val= 00:07:30.253 17:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.253 17:16:49 -- accel/accel.sh@21 -- # val= 00:07:30.253 17:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.253 17:16:49 -- accel/accel.sh@21 -- # val= 00:07:30.253 17:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.253 17:16:49 -- accel/accel.sh@21 -- # val= 00:07:30.253 17:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.253 17:16:49 -- accel/accel.sh@21 -- # val= 00:07:30.253 17:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.253 17:16:49 -- accel/accel.sh@21 -- # val= 00:07:30.253 17:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.253 17:16:49 -- accel/accel.sh@21 -- # val= 00:07:30.253 17:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.253 17:16:49 
-- accel/accel.sh@20 -- # IFS=: 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.253 17:16:49 -- accel/accel.sh@21 -- # val= 00:07:30.253 17:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.253 17:16:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.253 17:16:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.253 17:16:49 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:30.253 17:16:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.253 00:07:30.253 real 0m2.753s 00:07:30.253 user 0m9.199s 00:07:30.253 sys 0m0.269s 00:07:30.253 17:16:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.253 17:16:49 -- common/autotest_common.sh@10 -- # set +x 00:07:30.253 ************************************ 00:07:30.253 END TEST accel_decomp_full_mcore 00:07:30.253 ************************************ 00:07:30.253 17:16:49 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.253 17:16:49 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:30.253 17:16:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.253 17:16:49 -- common/autotest_common.sh@10 -- # set +x 00:07:30.253 ************************************ 00:07:30.253 START TEST accel_decomp_mthread 00:07:30.253 ************************************ 00:07:30.253 17:16:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.253 17:16:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.253 17:16:49 -- accel/accel.sh@17 -- # local accel_module 00:07:30.253 17:16:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.253 17:16:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.253 17:16:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.253 17:16:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.253 17:16:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.253 17:16:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.253 17:16:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.253 17:16:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.253 17:16:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.253 17:16:49 -- accel/accel.sh@42 -- # jq -r . 00:07:30.253 [2024-11-09 17:16:49.835494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:30.253 [2024-11-09 17:16:49.835578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545357 ] 00:07:30.253 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.253 [2024-11-09 17:16:49.906207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.253 [2024-11-09 17:16:49.971314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.631 17:16:51 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:31.631 00:07:31.631 SPDK Configuration: 00:07:31.631 Core mask: 0x1 00:07:31.631 00:07:31.631 Accel Perf Configuration: 00:07:31.631 Workload Type: decompress 00:07:31.631 Transfer size: 4096 bytes 00:07:31.631 Vector count 1 00:07:31.631 Module: software 00:07:31.631 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:31.631 Queue depth: 32 00:07:31.631 Allocate depth: 32 00:07:31.631 # threads/core: 2 00:07:31.631 Run time: 1 seconds 00:07:31.631 Verify: Yes 00:07:31.631 00:07:31.631 Running for 1 seconds... 00:07:31.631 00:07:31.631 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.631 ------------------------------------------------------------------------------------ 00:07:31.631 0,1 41408/s 76 MiB/s 0 0 00:07:31.631 0,0 41248/s 76 MiB/s 0 0 00:07:31.631 ==================================================================================== 00:07:31.631 Total 82656/s 322 MiB/s 0 0' 00:07:31.631 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.631 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.631 17:16:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:31.631 17:16:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.631 17:16:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.631 17:16:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.632 17:16:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:31.632 17:16:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.632 17:16:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.632 17:16:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.632 17:16:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.632 17:16:51 -- accel/accel.sh@42 -- # jq -r . 00:07:31.632 [2024-11-09 17:16:51.198120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:31.632 [2024-11-09 17:16:51.198191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545625 ] 00:07:31.632 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.632 [2024-11-09 17:16:51.265401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.632 [2024-11-09 17:16:51.330226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val= 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val= 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val= 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val=0x1 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val= 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val= 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val=decompress 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val= 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val=software 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val=32 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- 
accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val=32 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val=2 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val=Yes 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val= 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:31.632 17:16:51 -- accel/accel.sh@21 -- # val= 00:07:31.632 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:31.632 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.009 17:16:52 -- accel/accel.sh@21 -- # val= 00:07:33.009 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.009 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:07:33.009 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:07:33.009 17:16:52 -- accel/accel.sh@21 -- # val= 00:07:33.009 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.009 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:07:33.009 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:07:33.009 17:16:52 -- accel/accel.sh@21 -- # val= 00:07:33.009 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.009 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:07:33.009 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:07:33.009 17:16:52 -- accel/accel.sh@21 -- # val= 00:07:33.009 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.009 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:07:33.009 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:07:33.009 17:16:52 -- accel/accel.sh@21 -- # val= 00:07:33.009 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.009 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:07:33.009 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:07:33.009 17:16:52 -- accel/accel.sh@21 -- # val= 00:07:33.009 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.009 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:07:33.009 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:07:33.009 17:16:52 -- accel/accel.sh@21 -- # val= 00:07:33.009 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.009 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:07:33.009 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:07:33.009 17:16:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.009 17:16:52 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:33.009 17:16:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.009 00:07:33.009 real 0m2.724s 00:07:33.009 user 0m2.480s 00:07:33.009 sys 0m0.253s 00:07:33.009 17:16:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.009 17:16:52 -- common/autotest_common.sh@10 -- # set +x 
00:07:33.009 ************************************ 00:07:33.009 END TEST accel_decomp_mthread 00:07:33.009 ************************************ 00:07:33.009 17:16:52 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.009 17:16:52 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:33.009 17:16:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.010 17:16:52 -- common/autotest_common.sh@10 -- # set +x 00:07:33.010 ************************************ 00:07:33.010 START TEST accel_deomp_full_mthread 00:07:33.010 ************************************ 00:07:33.010 17:16:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.010 17:16:52 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.010 17:16:52 -- accel/accel.sh@17 -- # local accel_module 00:07:33.010 17:16:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.010 17:16:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.010 17:16:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.010 17:16:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.010 17:16:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.010 17:16:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.010 17:16:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.010 17:16:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.010 17:16:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.010 17:16:52 -- accel/accel.sh@42 -- # jq -r . 00:07:33.010 [2024-11-09 17:16:52.577558] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:33.010 [2024-11-09 17:16:52.577608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545871 ] 00:07:33.010 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.010 [2024-11-09 17:16:52.643693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.010 [2024-11-09 17:16:52.708402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.389 17:16:53 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:34.389 00:07:34.389 SPDK Configuration: 00:07:34.389 Core mask: 0x1 00:07:34.389 00:07:34.389 Accel Perf Configuration: 00:07:34.389 Workload Type: decompress 00:07:34.389 Transfer size: 111250 bytes 00:07:34.389 Vector count 1 00:07:34.389 Module: software 00:07:34.389 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:34.389 Queue depth: 32 00:07:34.389 Allocate depth: 32 00:07:34.389 # threads/core: 2 00:07:34.389 Run time: 1 seconds 00:07:34.389 Verify: Yes 00:07:34.389 00:07:34.389 Running for 1 seconds... 
00:07:34.389 00:07:34.389 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.389 ------------------------------------------------------------------------------------ 00:07:34.390 0,1 2944/s 121 MiB/s 0 0 00:07:34.390 0,0 2944/s 121 MiB/s 0 0 00:07:34.390 ==================================================================================== 00:07:34.390 Total 5888/s 624 MiB/s 0 0' 00:07:34.390 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.390 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.390 17:16:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.390 17:16:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.390 17:16:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.390 17:16:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.390 17:16:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.390 17:16:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.390 17:16:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.390 17:16:53 -- accel/accel.sh@42 -- # jq -r . 00:07:34.390 [2024-11-09 17:16:53.934986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:34.390 [2024-11-09 17:16:53.935034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546055 ] 00:07:34.390 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.390 [2024-11-09 17:16:54.002278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.390 [2024-11-09 17:16:54.071156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val= 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val= 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val= 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val=0x1 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val= 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val= 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val=decompress 00:07:34.390 17:16:54 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val= 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val=software 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val=32 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val=32 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val=2 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val=Yes 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val= 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.390 17:16:54 -- accel/accel.sh@21 -- # val= 00:07:34.390 17:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.390 17:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.768 17:16:55 -- accel/accel.sh@21 -- # val= 00:07:35.768 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.768 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.768 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.768 17:16:55 -- accel/accel.sh@21 -- # val= 00:07:35.768 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.768 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.768 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.768 17:16:55 -- accel/accel.sh@21 -- # val= 00:07:35.768 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 
00:07:35.768 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.769 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.769 17:16:55 -- accel/accel.sh@21 -- # val= 00:07:35.769 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.769 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.769 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.769 17:16:55 -- accel/accel.sh@21 -- # val= 00:07:35.769 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.769 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.769 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.769 17:16:55 -- accel/accel.sh@21 -- # val= 00:07:35.769 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.769 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.769 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.769 17:16:55 -- accel/accel.sh@21 -- # val= 00:07:35.769 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.769 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.769 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.769 17:16:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.769 17:16:55 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:35.769 17:16:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.769 00:07:35.769 real 0m2.723s 00:07:35.769 user 0m2.487s 00:07:35.769 sys 0m0.235s 00:07:35.769 17:16:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.769 17:16:55 -- common/autotest_common.sh@10 -- # set +x 00:07:35.769 ************************************ 00:07:35.769 END TEST accel_deomp_full_mthread 00:07:35.769 ************************************ 00:07:35.769 17:16:55 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:35.769 17:16:55 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:35.769 17:16:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:35.769 17:16:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.769 17:16:55 -- common/autotest_common.sh@10 -- # set +x 00:07:35.769 17:16:55 -- accel/accel.sh@129 -- # build_accel_config 00:07:35.769 17:16:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.769 17:16:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.769 17:16:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.769 17:16:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.769 17:16:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.769 17:16:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.769 17:16:55 -- accel/accel.sh@42 -- # jq -r . 00:07:35.769 ************************************ 00:07:35.769 START TEST accel_dif_functional_tests 00:07:35.769 ************************************ 00:07:35.769 17:16:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:35.769 [2024-11-09 17:16:55.366549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:35.769 [2024-11-09 17:16:55.366596] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546263 ] 00:07:35.769 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.769 [2024-11-09 17:16:55.433362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.769 [2024-11-09 17:16:55.499997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.769 [2024-11-09 17:16:55.500093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.769 [2024-11-09 17:16:55.500093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.028 00:07:36.028 00:07:36.028 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.028 http://cunit.sourceforge.net/ 00:07:36.028 00:07:36.028 00:07:36.028 Suite: accel_dif 00:07:36.028 Test: verify: DIF generated, GUARD check ...passed 00:07:36.028 Test: verify: DIF generated, APPTAG check ...passed 00:07:36.028 Test: verify: DIF generated, REFTAG check ...passed 00:07:36.028 Test: verify: DIF not generated, GUARD check ...[2024-11-09 17:16:55.567599] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:36.028 [2024-11-09 17:16:55.567642] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:36.028 passed 00:07:36.028 Test: verify: DIF not generated, APPTAG check ...[2024-11-09 17:16:55.567690] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:36.028 [2024-11-09 17:16:55.567708] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:36.028 passed 00:07:36.028 Test: verify: DIF not generated, REFTAG check ...[2024-11-09 17:16:55.567728] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:36.028 [2024-11-09 17:16:55.567744] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:36.028 passed 00:07:36.028 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:36.028 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-09 17:16:55.567787] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:36.028 passed 00:07:36.028 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:36.028 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:36.028 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:36.028 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-09 17:16:55.567891] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:36.028 passed 00:07:36.028 Test: generate copy: DIF generated, GUARD check ...passed 00:07:36.028 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:36.029 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:36.029 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:36.029 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:36.029 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:36.029 Test: generate copy: iovecs-len validate ...[2024-11-09 17:16:55.568063] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:36.029 passed 00:07:36.029 Test: generate copy: buffer alignment validate ...passed 00:07:36.029 00:07:36.029 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.029 suites 1 1 n/a 0 0 00:07:36.029 tests 20 20 20 0 0 00:07:36.029 asserts 204 204 204 0 n/a 00:07:36.029 00:07:36.029 Elapsed time = 0.000 seconds 00:07:36.029 00:07:36.029 real 0m0.428s 00:07:36.029 user 0m0.625s 00:07:36.029 sys 0m0.159s 00:07:36.029 17:16:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.029 17:16:55 -- common/autotest_common.sh@10 -- # set +x 00:07:36.029 ************************************ 00:07:36.029 END TEST accel_dif_functional_tests 00:07:36.029 ************************************ 00:07:36.029 00:07:36.029 real 0m58.077s 00:07:36.029 user 1m6.014s 00:07:36.029 sys 0m6.772s 00:07:36.029 17:16:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.029 17:16:55 -- common/autotest_common.sh@10 -- # set +x 00:07:36.029 ************************************ 00:07:36.029 END TEST accel 00:07:36.029 ************************************ 00:07:36.288 17:16:55 -- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:36.288 17:16:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:36.288 17:16:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.288 17:16:55 -- common/autotest_common.sh@10 -- # set +x 00:07:36.288 ************************************ 00:07:36.288 START TEST accel_rpc 00:07:36.288 ************************************ 00:07:36.288 17:16:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:36.288 * Looking for test storage... 00:07:36.288 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:36.288 17:16:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:36.288 17:16:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:36.288 17:16:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:36.288 17:16:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:36.288 17:16:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:36.288 17:16:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:36.288 17:16:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:36.288 17:16:55 -- scripts/common.sh@335 -- # IFS=.-: 00:07:36.288 17:16:55 -- scripts/common.sh@335 -- # read -ra ver1 00:07:36.288 17:16:55 -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.288 17:16:55 -- scripts/common.sh@336 -- # read -ra ver2 00:07:36.288 17:16:55 -- scripts/common.sh@337 -- # local 'op=<' 00:07:36.288 17:16:55 -- scripts/common.sh@339 -- # ver1_l=2 00:07:36.288 17:16:55 -- scripts/common.sh@340 -- # ver2_l=1 00:07:36.288 17:16:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:36.288 17:16:55 -- scripts/common.sh@343 -- # case "$op" in 00:07:36.288 17:16:55 -- scripts/common.sh@344 -- # : 1 00:07:36.288 17:16:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:36.288 17:16:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.288 17:16:56 -- scripts/common.sh@364 -- # decimal 1 00:07:36.288 17:16:56 -- scripts/common.sh@352 -- # local d=1 00:07:36.288 17:16:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.288 17:16:56 -- scripts/common.sh@354 -- # echo 1 00:07:36.288 17:16:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:36.288 17:16:56 -- scripts/common.sh@365 -- # decimal 2 00:07:36.288 17:16:56 -- scripts/common.sh@352 -- # local d=2 00:07:36.288 17:16:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.288 17:16:56 -- scripts/common.sh@354 -- # echo 2 00:07:36.288 17:16:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:36.288 17:16:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:36.288 17:16:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:36.288 17:16:56 -- scripts/common.sh@367 -- # return 0 00:07:36.288 17:16:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.288 17:16:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:36.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.288 --rc genhtml_branch_coverage=1 00:07:36.288 --rc genhtml_function_coverage=1 00:07:36.288 --rc genhtml_legend=1 00:07:36.288 --rc geninfo_all_blocks=1 00:07:36.288 --rc geninfo_unexecuted_blocks=1 00:07:36.288 00:07:36.288 ' 00:07:36.288 17:16:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:36.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.288 --rc genhtml_branch_coverage=1 00:07:36.288 --rc genhtml_function_coverage=1 00:07:36.289 --rc genhtml_legend=1 00:07:36.289 --rc geninfo_all_blocks=1 00:07:36.289 --rc geninfo_unexecuted_blocks=1 00:07:36.289 00:07:36.289 ' 00:07:36.289 17:16:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:36.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.289 --rc genhtml_branch_coverage=1 00:07:36.289 --rc genhtml_function_coverage=1 00:07:36.289 --rc genhtml_legend=1 00:07:36.289 --rc geninfo_all_blocks=1 00:07:36.289 --rc geninfo_unexecuted_blocks=1 00:07:36.289 00:07:36.289 ' 00:07:36.289 17:16:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:36.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.289 --rc genhtml_branch_coverage=1 00:07:36.289 --rc genhtml_function_coverage=1 00:07:36.289 --rc genhtml_legend=1 00:07:36.289 --rc geninfo_all_blocks=1 00:07:36.289 --rc geninfo_unexecuted_blocks=1 00:07:36.289 00:07:36.289 ' 00:07:36.289 17:16:56 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:36.289 17:16:56 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2546544 00:07:36.289 17:16:56 -- accel/accel_rpc.sh@15 -- # waitforlisten 2546544 00:07:36.289 17:16:56 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:36.289 17:16:56 -- common/autotest_common.sh@829 -- # '[' -z 2546544 ']' 00:07:36.289 17:16:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.289 17:16:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.289 17:16:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:36.289 17:16:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.289 17:16:56 -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 [2024-11-09 17:16:56.067264] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:36.548 [2024-11-09 17:16:56.067319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546544 ] 00:07:36.548 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.548 [2024-11-09 17:16:56.134075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.548 [2024-11-09 17:16:56.200706] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:36.548 [2024-11-09 17:16:56.200843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.486 17:16:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.486 17:16:56 -- common/autotest_common.sh@862 -- # return 0 00:07:37.486 17:16:56 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:37.486 17:16:56 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:37.486 17:16:56 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:37.486 17:16:56 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:37.486 17:16:56 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:37.486 17:16:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:37.486 17:16:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.486 17:16:56 -- common/autotest_common.sh@10 -- # set +x 00:07:37.486 ************************************ 00:07:37.486 START TEST accel_assign_opcode 00:07:37.486 ************************************ 00:07:37.486 17:16:56 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:37.486 17:16:56 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:37.486 17:16:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.486 17:16:56 -- common/autotest_common.sh@10 -- # set +x 00:07:37.486 [2024-11-09 17:16:56.902905] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:37.486 17:16:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.486 17:16:56 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:37.486 17:16:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.486 17:16:56 -- common/autotest_common.sh@10 -- # set +x 00:07:37.486 [2024-11-09 17:16:56.914929] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:37.486 17:16:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.486 17:16:56 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:37.486 17:16:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.486 17:16:56 -- common/autotest_common.sh@10 -- # set +x 00:07:37.486 17:16:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.486 17:16:57 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:37.486 17:16:57 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:37.486 17:16:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.486 17:16:57 -- common/autotest_common.sh@10 -- # set +x 00:07:37.486 17:16:57 -- accel/accel_rpc.sh@42 -- # grep software 00:07:37.486 17:16:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:37.486 software 00:07:37.486 00:07:37.486 real 0m0.237s 00:07:37.486 user 0m0.048s 00:07:37.486 sys 0m0.012s 00:07:37.486 17:16:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.486 17:16:57 -- common/autotest_common.sh@10 -- # set +x 00:07:37.486 ************************************ 00:07:37.486 END TEST accel_assign_opcode 00:07:37.486 ************************************ 00:07:37.486 17:16:57 -- accel/accel_rpc.sh@55 -- # killprocess 2546544 00:07:37.486 17:16:57 -- common/autotest_common.sh@936 -- # '[' -z 2546544 ']' 00:07:37.486 17:16:57 -- common/autotest_common.sh@940 -- # kill -0 2546544 00:07:37.486 17:16:57 -- common/autotest_common.sh@941 -- # uname 00:07:37.486 17:16:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:37.486 17:16:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2546544 00:07:37.486 17:16:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:37.486 17:16:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:37.486 17:16:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2546544' 00:07:37.486 killing process with pid 2546544 00:07:37.486 17:16:57 -- common/autotest_common.sh@955 -- # kill 2546544 00:07:37.486 17:16:57 -- common/autotest_common.sh@960 -- # wait 2546544 00:07:38.056 00:07:38.056 real 0m1.730s 00:07:38.056 user 0m1.791s 00:07:38.056 sys 0m0.475s 00:07:38.056 17:16:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.056 17:16:57 -- common/autotest_common.sh@10 -- # set +x 00:07:38.056 ************************************ 00:07:38.056 END TEST accel_rpc 00:07:38.056 ************************************ 00:07:38.056 17:16:57 -- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:38.056 17:16:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.056 17:16:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.056 17:16:57 -- common/autotest_common.sh@10 -- # set +x 00:07:38.056 ************************************ 00:07:38.056 START TEST app_cmdline 00:07:38.056 ************************************ 00:07:38.056 17:16:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:38.056 * Looking for test storage... 
00:07:38.056 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:38.056 17:16:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:38.056 17:16:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:38.056 17:16:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:38.056 17:16:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:38.056 17:16:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:38.056 17:16:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:38.056 17:16:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:38.056 17:16:57 -- scripts/common.sh@335 -- # IFS=.-: 00:07:38.056 17:16:57 -- scripts/common.sh@335 -- # read -ra ver1 00:07:38.056 17:16:57 -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.056 17:16:57 -- scripts/common.sh@336 -- # read -ra ver2 00:07:38.056 17:16:57 -- scripts/common.sh@337 -- # local 'op=<' 00:07:38.056 17:16:57 -- scripts/common.sh@339 -- # ver1_l=2 00:07:38.056 17:16:57 -- scripts/common.sh@340 -- # ver2_l=1 00:07:38.056 17:16:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:38.056 17:16:57 -- scripts/common.sh@343 -- # case "$op" in 00:07:38.056 17:16:57 -- scripts/common.sh@344 -- # : 1 00:07:38.056 17:16:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:38.056 17:16:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.056 17:16:57 -- scripts/common.sh@364 -- # decimal 1 00:07:38.056 17:16:57 -- scripts/common.sh@352 -- # local d=1 00:07:38.056 17:16:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.056 17:16:57 -- scripts/common.sh@354 -- # echo 1 00:07:38.056 17:16:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:38.056 17:16:57 -- scripts/common.sh@365 -- # decimal 2 00:07:38.056 17:16:57 -- scripts/common.sh@352 -- # local d=2 00:07:38.056 17:16:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.056 17:16:57 -- scripts/common.sh@354 -- # echo 2 00:07:38.056 17:16:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:38.056 17:16:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:38.056 17:16:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:38.056 17:16:57 -- scripts/common.sh@367 -- # return 0 00:07:38.056 17:16:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.056 17:16:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:38.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.056 --rc genhtml_branch_coverage=1 00:07:38.056 --rc genhtml_function_coverage=1 00:07:38.056 --rc genhtml_legend=1 00:07:38.056 --rc geninfo_all_blocks=1 00:07:38.056 --rc geninfo_unexecuted_blocks=1 00:07:38.056 00:07:38.056 ' 00:07:38.056 17:16:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:38.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.056 --rc genhtml_branch_coverage=1 00:07:38.056 --rc genhtml_function_coverage=1 00:07:38.056 --rc genhtml_legend=1 00:07:38.056 --rc geninfo_all_blocks=1 00:07:38.056 --rc geninfo_unexecuted_blocks=1 00:07:38.056 00:07:38.056 ' 00:07:38.056 17:16:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:38.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.056 --rc genhtml_branch_coverage=1 00:07:38.056 --rc genhtml_function_coverage=1 00:07:38.056 --rc genhtml_legend=1 00:07:38.056 --rc geninfo_all_blocks=1 00:07:38.056 --rc geninfo_unexecuted_blocks=1 00:07:38.056 00:07:38.056 ' 
00:07:38.056 17:16:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:38.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.056 --rc genhtml_branch_coverage=1 00:07:38.056 --rc genhtml_function_coverage=1 00:07:38.056 --rc genhtml_legend=1 00:07:38.056 --rc geninfo_all_blocks=1 00:07:38.056 --rc geninfo_unexecuted_blocks=1 00:07:38.056 00:07:38.056 ' 00:07:38.056 17:16:57 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:38.056 17:16:57 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2546892 00:07:38.056 17:16:57 -- app/cmdline.sh@18 -- # waitforlisten 2546892 00:07:38.056 17:16:57 -- common/autotest_common.sh@829 -- # '[' -z 2546892 ']' 00:07:38.056 17:16:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.056 17:16:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.056 17:16:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.056 17:16:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.056 17:16:57 -- common/autotest_common.sh@10 -- # set +x 00:07:38.056 17:16:57 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:38.315 [2024-11-09 17:16:57.832445] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:38.315 [2024-11-09 17:16:57.832498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546892 ] 00:07:38.315 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.315 [2024-11-09 17:16:57.900709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.315 [2024-11-09 17:16:57.973018] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:38.315 [2024-11-09 17:16:57.973132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.882 17:16:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.882 17:16:58 -- common/autotest_common.sh@862 -- # return 0 00:07:38.882 17:16:58 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:39.141 { 00:07:39.141 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:39.141 "fields": { 00:07:39.141 "major": 24, 00:07:39.141 "minor": 1, 00:07:39.141 "patch": 1, 00:07:39.141 "suffix": "-pre", 00:07:39.141 "commit": "c13c99a5e" 00:07:39.141 } 00:07:39.141 } 00:07:39.141 17:16:58 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:39.141 17:16:58 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:39.141 17:16:58 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:39.141 17:16:58 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:39.141 17:16:58 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:39.141 17:16:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.141 17:16:58 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:39.141 17:16:58 -- common/autotest_common.sh@10 -- # set +x 00:07:39.141 17:16:58 -- app/cmdline.sh@26 -- # sort 00:07:39.141 17:16:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.141 17:16:58 -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:39.141 17:16:58 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:39.141 17:16:58 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.141 17:16:58 -- common/autotest_common.sh@650 -- # local es=0 00:07:39.141 17:16:58 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.141 17:16:58 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:39.141 17:16:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.141 17:16:58 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:39.141 17:16:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.141 17:16:58 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:39.141 17:16:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.142 17:16:58 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:39.142 17:16:58 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:39.142 17:16:58 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.401 request: 00:07:39.401 { 00:07:39.401 "method": "env_dpdk_get_mem_stats", 00:07:39.401 "req_id": 1 00:07:39.401 } 00:07:39.401 Got JSON-RPC error response 00:07:39.401 response: 00:07:39.401 { 00:07:39.401 "code": -32601, 00:07:39.401 "message": "Method not found" 00:07:39.401 } 00:07:39.401 17:16:59 -- common/autotest_common.sh@653 -- # es=1 00:07:39.401 17:16:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:39.401 17:16:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:39.401 17:16:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:39.401 17:16:59 -- app/cmdline.sh@1 -- # killprocess 2546892 00:07:39.401 17:16:59 -- common/autotest_common.sh@936 -- # '[' -z 2546892 ']' 00:07:39.401 17:16:59 -- common/autotest_common.sh@940 -- # kill -0 2546892 00:07:39.401 17:16:59 -- common/autotest_common.sh@941 -- # uname 00:07:39.401 17:16:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:39.401 17:16:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2546892 00:07:39.401 17:16:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:39.401 17:16:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:39.401 17:16:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2546892' 00:07:39.401 killing process with pid 2546892 00:07:39.401 17:16:59 -- common/autotest_common.sh@955 -- # kill 2546892 00:07:39.401 17:16:59 -- common/autotest_common.sh@960 -- # wait 2546892 00:07:39.660 00:07:39.660 real 0m1.799s 00:07:39.660 user 0m2.094s 00:07:39.660 sys 0m0.492s 00:07:39.660 17:16:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.660 17:16:59 -- common/autotest_common.sh@10 -- # set +x 00:07:39.660 ************************************ 00:07:39.660 END TEST app_cmdline 00:07:39.660 ************************************ 00:07:39.920 17:16:59 -- spdk/autotest.sh@179 -- # run_test version 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:39.920 17:16:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:39.920 17:16:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.920 17:16:59 -- common/autotest_common.sh@10 -- # set +x 00:07:39.920 ************************************ 00:07:39.920 START TEST version 00:07:39.920 ************************************ 00:07:39.920 17:16:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:39.920 * Looking for test storage... 00:07:39.920 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:39.920 17:16:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:39.920 17:16:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:39.920 17:16:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:39.920 17:16:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:39.920 17:16:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:39.920 17:16:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:39.920 17:16:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:39.920 17:16:59 -- scripts/common.sh@335 -- # IFS=.-: 00:07:39.920 17:16:59 -- scripts/common.sh@335 -- # read -ra ver1 00:07:39.920 17:16:59 -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.920 17:16:59 -- scripts/common.sh@336 -- # read -ra ver2 00:07:39.920 17:16:59 -- scripts/common.sh@337 -- # local 'op=<' 00:07:39.920 17:16:59 -- scripts/common.sh@339 -- # ver1_l=2 00:07:39.920 17:16:59 -- scripts/common.sh@340 -- # ver2_l=1 00:07:39.920 17:16:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:39.920 17:16:59 -- scripts/common.sh@343 -- # case "$op" in 00:07:39.920 17:16:59 -- scripts/common.sh@344 -- # : 1 00:07:39.920 17:16:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:39.920 17:16:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.920 17:16:59 -- scripts/common.sh@364 -- # decimal 1 00:07:39.920 17:16:59 -- scripts/common.sh@352 -- # local d=1 00:07:39.920 17:16:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.920 17:16:59 -- scripts/common.sh@354 -- # echo 1 00:07:39.920 17:16:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:39.920 17:16:59 -- scripts/common.sh@365 -- # decimal 2 00:07:39.920 17:16:59 -- scripts/common.sh@352 -- # local d=2 00:07:39.920 17:16:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.920 17:16:59 -- scripts/common.sh@354 -- # echo 2 00:07:39.920 17:16:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:39.920 17:16:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:39.920 17:16:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:39.920 17:16:59 -- scripts/common.sh@367 -- # return 0 00:07:39.920 17:16:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.920 17:16:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:39.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.920 --rc genhtml_branch_coverage=1 00:07:39.920 --rc genhtml_function_coverage=1 00:07:39.920 --rc genhtml_legend=1 00:07:39.920 --rc geninfo_all_blocks=1 00:07:39.920 --rc geninfo_unexecuted_blocks=1 00:07:39.920 00:07:39.920 ' 00:07:39.920 17:16:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:39.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.920 --rc genhtml_branch_coverage=1 00:07:39.920 --rc genhtml_function_coverage=1 00:07:39.920 --rc genhtml_legend=1 00:07:39.920 --rc geninfo_all_blocks=1 00:07:39.920 --rc geninfo_unexecuted_blocks=1 00:07:39.920 00:07:39.920 ' 00:07:39.920 17:16:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:39.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.920 --rc genhtml_branch_coverage=1 00:07:39.920 --rc genhtml_function_coverage=1 00:07:39.920 --rc genhtml_legend=1 00:07:39.920 --rc geninfo_all_blocks=1 00:07:39.920 --rc geninfo_unexecuted_blocks=1 00:07:39.920 00:07:39.920 ' 00:07:39.920 17:16:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:39.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.920 --rc genhtml_branch_coverage=1 00:07:39.920 --rc genhtml_function_coverage=1 00:07:39.920 --rc genhtml_legend=1 00:07:39.920 --rc geninfo_all_blocks=1 00:07:39.920 --rc geninfo_unexecuted_blocks=1 00:07:39.920 00:07:39.920 ' 00:07:39.920 17:16:59 -- app/version.sh@17 -- # get_header_version major 00:07:39.920 17:16:59 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:39.920 17:16:59 -- app/version.sh@14 -- # tr -d '"' 00:07:39.920 17:16:59 -- app/version.sh@14 -- # cut -f2 00:07:39.920 17:16:59 -- app/version.sh@17 -- # major=24 00:07:39.920 17:16:59 -- app/version.sh@18 -- # get_header_version minor 00:07:39.920 17:16:59 -- app/version.sh@14 -- # tr -d '"' 00:07:39.920 17:16:59 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:39.920 17:16:59 -- app/version.sh@14 -- # cut -f2 00:07:39.920 17:16:59 -- app/version.sh@18 -- # minor=1 00:07:39.920 17:16:59 -- app/version.sh@19 -- # get_header_version patch 00:07:39.920 17:16:59 -- app/version.sh@14 -- # tr -d '"' 00:07:39.921 17:16:59 -- app/version.sh@13 -- # grep -E 
'^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:39.921 17:16:59 -- app/version.sh@14 -- # cut -f2 00:07:39.921 17:16:59 -- app/version.sh@19 -- # patch=1 00:07:39.921 17:16:59 -- app/version.sh@20 -- # get_header_version suffix 00:07:39.921 17:16:59 -- app/version.sh@14 -- # tr -d '"' 00:07:39.921 17:16:59 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:39.921 17:16:59 -- app/version.sh@14 -- # cut -f2 00:07:39.921 17:16:59 -- app/version.sh@20 -- # suffix=-pre 00:07:39.921 17:16:59 -- app/version.sh@22 -- # version=24.1 00:07:39.921 17:16:59 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:39.921 17:16:59 -- app/version.sh@25 -- # version=24.1.1 00:07:39.921 17:16:59 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:39.921 17:16:59 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:39.921 17:16:59 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:40.179 17:16:59 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:40.179 17:16:59 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:40.179 00:07:40.179 real 0m0.256s 00:07:40.179 user 0m0.152s 00:07:40.179 sys 0m0.148s 00:07:40.179 17:16:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.179 17:16:59 -- common/autotest_common.sh@10 -- # set +x 00:07:40.179 ************************************ 00:07:40.179 END TEST version 00:07:40.179 ************************************ 00:07:40.179 17:16:59 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:40.179 17:16:59 -- spdk/autotest.sh@191 -- # uname -s 00:07:40.179 17:16:59 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:40.179 17:16:59 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:40.179 17:16:59 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:40.179 17:16:59 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:40.179 17:16:59 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:40.179 17:16:59 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:40.179 17:16:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:40.179 17:16:59 -- common/autotest_common.sh@10 -- # set +x 00:07:40.179 17:16:59 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:40.179 17:16:59 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:40.179 17:16:59 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:40.179 17:16:59 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:40.179 17:16:59 -- spdk/autotest.sh@278 -- # '[' rdma = rdma ']' 00:07:40.179 17:16:59 -- spdk/autotest.sh@279 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:40.179 17:16:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:40.179 17:16:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.179 17:16:59 -- common/autotest_common.sh@10 -- # set +x 00:07:40.179 ************************************ 00:07:40.179 START TEST nvmf_rdma 00:07:40.179 ************************************ 00:07:40.179 17:16:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:40.179 * Looking for test 
storage... 00:07:40.179 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:40.179 17:16:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:40.179 17:16:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:40.179 17:16:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:40.438 17:16:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:40.438 17:16:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:40.438 17:16:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:40.438 17:16:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:40.438 17:16:59 -- scripts/common.sh@335 -- # IFS=.-: 00:07:40.438 17:16:59 -- scripts/common.sh@335 -- # read -ra ver1 00:07:40.438 17:16:59 -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.438 17:16:59 -- scripts/common.sh@336 -- # read -ra ver2 00:07:40.438 17:16:59 -- scripts/common.sh@337 -- # local 'op=<' 00:07:40.438 17:16:59 -- scripts/common.sh@339 -- # ver1_l=2 00:07:40.438 17:16:59 -- scripts/common.sh@340 -- # ver2_l=1 00:07:40.438 17:16:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:40.438 17:16:59 -- scripts/common.sh@343 -- # case "$op" in 00:07:40.438 17:16:59 -- scripts/common.sh@344 -- # : 1 00:07:40.438 17:16:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:40.438 17:16:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.438 17:16:59 -- scripts/common.sh@364 -- # decimal 1 00:07:40.438 17:16:59 -- scripts/common.sh@352 -- # local d=1 00:07:40.438 17:16:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.438 17:16:59 -- scripts/common.sh@354 -- # echo 1 00:07:40.438 17:16:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:40.438 17:16:59 -- scripts/common.sh@365 -- # decimal 2 00:07:40.438 17:16:59 -- scripts/common.sh@352 -- # local d=2 00:07:40.438 17:16:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.438 17:16:59 -- scripts/common.sh@354 -- # echo 2 00:07:40.438 17:16:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:40.438 17:16:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:40.438 17:16:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:40.438 17:16:59 -- scripts/common.sh@367 -- # return 0 00:07:40.438 17:16:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.438 17:16:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:40.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.438 --rc genhtml_branch_coverage=1 00:07:40.438 --rc genhtml_function_coverage=1 00:07:40.438 --rc genhtml_legend=1 00:07:40.438 --rc geninfo_all_blocks=1 00:07:40.438 --rc geninfo_unexecuted_blocks=1 00:07:40.438 00:07:40.438 ' 00:07:40.438 17:16:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:40.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.438 --rc genhtml_branch_coverage=1 00:07:40.438 --rc genhtml_function_coverage=1 00:07:40.438 --rc genhtml_legend=1 00:07:40.438 --rc geninfo_all_blocks=1 00:07:40.438 --rc geninfo_unexecuted_blocks=1 00:07:40.438 00:07:40.438 ' 00:07:40.438 17:16:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:40.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.438 --rc genhtml_branch_coverage=1 00:07:40.438 --rc genhtml_function_coverage=1 00:07:40.438 --rc genhtml_legend=1 00:07:40.438 --rc geninfo_all_blocks=1 00:07:40.438 --rc geninfo_unexecuted_blocks=1 00:07:40.438 00:07:40.438 
' 00:07:40.438 17:16:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:40.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.438 --rc genhtml_branch_coverage=1 00:07:40.438 --rc genhtml_function_coverage=1 00:07:40.438 --rc genhtml_legend=1 00:07:40.438 --rc geninfo_all_blocks=1 00:07:40.438 --rc geninfo_unexecuted_blocks=1 00:07:40.438 00:07:40.438 ' 00:07:40.438 17:16:59 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:40.438 17:16:59 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:40.438 17:16:59 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.438 17:16:59 -- nvmf/common.sh@7 -- # uname -s 00:07:40.438 17:17:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.438 17:17:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.438 17:17:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.438 17:17:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.438 17:17:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.438 17:17:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.438 17:17:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.438 17:17:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.438 17:17:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.438 17:17:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.438 17:17:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:40.438 17:17:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:40.438 17:17:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.438 17:17:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.438 17:17:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.438 17:17:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:40.438 17:17:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.438 17:17:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.438 17:17:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.439 17:17:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.439 17:17:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.439 17:17:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.439 17:17:00 -- paths/export.sh@5 -- # export PATH 00:07:40.439 17:17:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.439 17:17:00 -- nvmf/common.sh@46 -- # : 0 00:07:40.439 17:17:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:40.439 17:17:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:40.439 17:17:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:40.439 17:17:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.439 17:17:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.439 17:17:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:40.439 17:17:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:40.439 17:17:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:40.439 17:17:00 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:40.439 17:17:00 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:40.439 17:17:00 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:40.439 17:17:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.439 17:17:00 -- common/autotest_common.sh@10 -- # set +x 00:07:40.439 17:17:00 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:40.439 17:17:00 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:40.439 17:17:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:40.439 17:17:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.439 17:17:00 -- common/autotest_common.sh@10 -- # set +x 00:07:40.439 ************************************ 00:07:40.439 START TEST nvmf_example 00:07:40.439 ************************************ 00:07:40.439 17:17:00 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:40.439 * Looking for test storage... 
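The version.sh run traced above assembles the SPDK version string from include/spdk/version.h (major/minor/patch/suffix) and checks it against the Python package's spdk.__version__. A minimal standalone sketch of that check, assuming it is run against the same source tree; the get_field helper name is invented here, but the grep/cut/tr pipeline is the one shown in the trace:

#!/usr/bin/env bash
# Sketch only: rebuild the version string the way test/app/version.sh does above.
spdk_dir=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}   # SPDK_DIR is an assumed override
hdr="$spdk_dir/include/spdk/version.h"

get_field() {
    # e.g. get_field MAJOR -> 24; same pipeline as get_header_version in the trace
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

major=$(get_field MAJOR)
minor=$(get_field MINOR)
patch=$(get_field PATCH)
suffix=$(get_field SUFFIX)

version="$major.$minor"
(( patch != 0 )) && version="$version.$patch"
[[ $suffix == -pre ]] && version="${version}rc0"        # version.sh maps the -pre suffix to rc0

echo "$version"                                          # 24.1.1rc0 in this run
# the trace points PYTHONPATH at the in-tree python package before the comparison
PYTHONPATH="$spdk_dir/python" python3 -c 'import spdk; print(spdk.__version__)'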
00:07:40.439 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:40.439 17:17:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:40.439 17:17:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:40.439 17:17:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:40.439 17:17:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:40.439 17:17:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:40.439 17:17:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:40.699 17:17:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:40.699 17:17:00 -- scripts/common.sh@335 -- # IFS=.-: 00:07:40.699 17:17:00 -- scripts/common.sh@335 -- # read -ra ver1 00:07:40.699 17:17:00 -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.699 17:17:00 -- scripts/common.sh@336 -- # read -ra ver2 00:07:40.699 17:17:00 -- scripts/common.sh@337 -- # local 'op=<' 00:07:40.699 17:17:00 -- scripts/common.sh@339 -- # ver1_l=2 00:07:40.699 17:17:00 -- scripts/common.sh@340 -- # ver2_l=1 00:07:40.699 17:17:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:40.699 17:17:00 -- scripts/common.sh@343 -- # case "$op" in 00:07:40.699 17:17:00 -- scripts/common.sh@344 -- # : 1 00:07:40.699 17:17:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:40.699 17:17:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.699 17:17:00 -- scripts/common.sh@364 -- # decimal 1 00:07:40.699 17:17:00 -- scripts/common.sh@352 -- # local d=1 00:07:40.699 17:17:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.699 17:17:00 -- scripts/common.sh@354 -- # echo 1 00:07:40.699 17:17:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:40.699 17:17:00 -- scripts/common.sh@365 -- # decimal 2 00:07:40.699 17:17:00 -- scripts/common.sh@352 -- # local d=2 00:07:40.699 17:17:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.699 17:17:00 -- scripts/common.sh@354 -- # echo 2 00:07:40.699 17:17:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:40.699 17:17:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:40.699 17:17:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:40.699 17:17:00 -- scripts/common.sh@367 -- # return 0 00:07:40.699 17:17:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.699 17:17:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:40.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.699 --rc genhtml_branch_coverage=1 00:07:40.699 --rc genhtml_function_coverage=1 00:07:40.699 --rc genhtml_legend=1 00:07:40.699 --rc geninfo_all_blocks=1 00:07:40.699 --rc geninfo_unexecuted_blocks=1 00:07:40.699 00:07:40.699 ' 00:07:40.699 17:17:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:40.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.699 --rc genhtml_branch_coverage=1 00:07:40.699 --rc genhtml_function_coverage=1 00:07:40.699 --rc genhtml_legend=1 00:07:40.699 --rc geninfo_all_blocks=1 00:07:40.699 --rc geninfo_unexecuted_blocks=1 00:07:40.699 00:07:40.699 ' 00:07:40.699 17:17:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:40.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.699 --rc genhtml_branch_coverage=1 00:07:40.699 --rc genhtml_function_coverage=1 00:07:40.699 --rc genhtml_legend=1 00:07:40.699 --rc geninfo_all_blocks=1 00:07:40.699 --rc geninfo_unexecuted_blocks=1 00:07:40.699 00:07:40.699 ' 
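The nvmf/common.sh lines sourced just above and again below set up the initiator-side defaults for this run: port 4420, the 192.168.100 address range, a host NQN from nvme gen-hostnqn, and an NVME_CONNECT command that becomes 'nvme connect -i 15' once the mlx5 NICs are detected. A hedged sketch of the nvme-cli call those defaults add up to; the target address and subsystem NQN are the ones used later in this run, and the host ID derivation is written to match the values in the trace rather than copied from common.sh:

#!/usr/bin/env bash
# Sketch: the 'nvme connect' invocation implied by the environment above.
NVMF_PORT=4420
NVMF_FIRST_TARGET_IP=192.168.100.8            # assigned to mlx_0_0 later in the log
NVME_HOSTNQN=$(nvme gen-hostnqn)              # same command common.sh uses
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # host ID = uuid part of the NQN, as in the trace

# -i 15 (number of I/O queues) is what NVME_CONNECT gains on the RDMA/mlx5 path
nvme connect -i 15 -t rdma \
    -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"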
00:07:40.699 17:17:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:40.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.699 --rc genhtml_branch_coverage=1 00:07:40.699 --rc genhtml_function_coverage=1 00:07:40.699 --rc genhtml_legend=1 00:07:40.699 --rc geninfo_all_blocks=1 00:07:40.699 --rc geninfo_unexecuted_blocks=1 00:07:40.699 00:07:40.699 ' 00:07:40.699 17:17:00 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.699 17:17:00 -- nvmf/common.sh@7 -- # uname -s 00:07:40.699 17:17:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.699 17:17:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.699 17:17:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.699 17:17:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.699 17:17:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.699 17:17:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.699 17:17:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.699 17:17:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.699 17:17:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.699 17:17:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.699 17:17:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:40.699 17:17:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:40.699 17:17:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.699 17:17:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.699 17:17:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.699 17:17:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:40.699 17:17:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.699 17:17:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.699 17:17:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.699 17:17:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.699 17:17:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.699 17:17:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.699 17:17:00 -- paths/export.sh@5 -- # export PATH 00:07:40.699 17:17:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.699 17:17:00 -- nvmf/common.sh@46 -- # : 0 00:07:40.699 17:17:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:40.699 17:17:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:40.699 17:17:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:40.699 17:17:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.699 17:17:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.699 17:17:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:40.699 17:17:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:40.699 17:17:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:40.699 17:17:00 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:40.699 17:17:00 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:40.699 17:17:00 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:40.699 17:17:00 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:40.699 17:17:00 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:40.699 17:17:00 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:40.699 17:17:00 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:40.699 17:17:00 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:40.699 17:17:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.699 17:17:00 -- common/autotest_common.sh@10 -- # set +x 00:07:40.700 17:17:00 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:40.700 17:17:00 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:07:40.700 17:17:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.700 17:17:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:40.700 17:17:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:40.700 17:17:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:40.700 17:17:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.700 17:17:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.700 17:17:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.700 17:17:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:40.700 17:17:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:40.700 17:17:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:40.700 17:17:00 -- 
common/autotest_common.sh@10 -- # set +x 00:07:47.361 17:17:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:47.361 17:17:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:47.361 17:17:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:47.361 17:17:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:47.361 17:17:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:47.361 17:17:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:47.361 17:17:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:47.361 17:17:06 -- nvmf/common.sh@294 -- # net_devs=() 00:07:47.361 17:17:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:47.361 17:17:06 -- nvmf/common.sh@295 -- # e810=() 00:07:47.361 17:17:06 -- nvmf/common.sh@295 -- # local -ga e810 00:07:47.361 17:17:06 -- nvmf/common.sh@296 -- # x722=() 00:07:47.361 17:17:06 -- nvmf/common.sh@296 -- # local -ga x722 00:07:47.361 17:17:06 -- nvmf/common.sh@297 -- # mlx=() 00:07:47.361 17:17:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:47.361 17:17:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.361 17:17:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.361 17:17:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.361 17:17:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.361 17:17:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.361 17:17:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.361 17:17:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.361 17:17:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.361 17:17:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.361 17:17:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.361 17:17:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.361 17:17:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:47.361 17:17:06 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:07:47.361 17:17:06 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:07:47.361 17:17:06 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:07:47.361 17:17:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:47.361 17:17:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:47.361 17:17:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:47.361 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:47.361 17:17:06 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:47.361 17:17:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:47.361 17:17:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:47.361 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:47.361 17:17:06 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@345 -- # [[ 
mlx5_core == unbound ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:47.361 17:17:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:47.361 17:17:06 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:47.361 17:17:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.361 17:17:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:47.361 17:17:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.361 17:17:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:47.361 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:47.361 17:17:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.361 17:17:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:47.361 17:17:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.361 17:17:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:47.361 17:17:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.361 17:17:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:47.361 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:47.361 17:17:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.361 17:17:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:47.361 17:17:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:47.361 17:17:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:07:47.361 17:17:06 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:07:47.362 17:17:06 -- nvmf/common.sh@408 -- # rdma_device_init 00:07:47.362 17:17:06 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:07:47.362 17:17:06 -- nvmf/common.sh@57 -- # uname 00:07:47.362 17:17:06 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:07:47.362 17:17:06 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:07:47.362 17:17:06 -- nvmf/common.sh@62 -- # modprobe ib_core 00:07:47.362 17:17:06 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:07:47.362 17:17:06 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:07:47.362 17:17:06 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:07:47.362 17:17:06 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:07:47.362 17:17:06 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:07:47.362 17:17:06 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:07:47.362 17:17:06 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:47.362 17:17:06 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:07:47.362 17:17:06 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:47.362 17:17:06 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:47.362 17:17:06 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:47.362 17:17:06 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:47.362 17:17:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:47.362 17:17:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:47.362 17:17:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:47.362 17:17:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:47.362 17:17:07 -- nvmf/common.sh@103 
-- # echo mlx_0_0 00:07:47.362 17:17:07 -- nvmf/common.sh@104 -- # continue 2 00:07:47.362 17:17:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:47.362 17:17:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:47.362 17:17:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:47.362 17:17:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:47.362 17:17:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:47.362 17:17:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:47.362 17:17:07 -- nvmf/common.sh@104 -- # continue 2 00:07:47.362 17:17:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:47.362 17:17:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:07:47.362 17:17:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:47.362 17:17:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:47.362 17:17:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:47.362 17:17:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:47.362 17:17:07 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:07:47.362 17:17:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:07:47.362 17:17:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:07:47.362 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:47.362 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:47.362 altname enp217s0f0np0 00:07:47.362 altname ens818f0np0 00:07:47.362 inet 192.168.100.8/24 scope global mlx_0_0 00:07:47.362 valid_lft forever preferred_lft forever 00:07:47.362 17:17:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:47.362 17:17:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:07:47.362 17:17:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:47.362 17:17:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:47.362 17:17:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:47.362 17:17:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:47.362 17:17:07 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:07:47.362 17:17:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:07:47.362 17:17:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:07:47.362 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:47.362 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:47.362 altname enp217s0f1np1 00:07:47.362 altname ens818f1np1 00:07:47.362 inet 192.168.100.9/24 scope global mlx_0_1 00:07:47.362 valid_lft forever preferred_lft forever 00:07:47.362 17:17:07 -- nvmf/common.sh@410 -- # return 0 00:07:47.362 17:17:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:47.362 17:17:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:47.362 17:17:07 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:07:47.362 17:17:07 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:07:47.362 17:17:07 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:07:47.362 17:17:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:47.362 17:17:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:47.362 17:17:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:47.362 17:17:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:47.362 17:17:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:47.362 17:17:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:47.362 17:17:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:47.362 17:17:07 -- 
nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:47.362 17:17:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:07:47.362 17:17:07 -- nvmf/common.sh@104 -- # continue 2 00:07:47.362 17:17:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:47.362 17:17:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:47.362 17:17:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:47.362 17:17:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:47.362 17:17:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:47.362 17:17:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:47.362 17:17:07 -- nvmf/common.sh@104 -- # continue 2 00:07:47.362 17:17:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:47.362 17:17:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:07:47.362 17:17:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:47.362 17:17:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:47.363 17:17:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:47.363 17:17:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:47.363 17:17:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:47.363 17:17:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:07:47.363 17:17:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:47.363 17:17:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:47.363 17:17:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:47.363 17:17:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:47.363 17:17:07 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:07:47.363 192.168.100.9' 00:07:47.363 17:17:07 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:07:47.363 192.168.100.9' 00:07:47.363 17:17:07 -- nvmf/common.sh@445 -- # head -n 1 00:07:47.363 17:17:07 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:47.363 17:17:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:47.363 192.168.100.9' 00:07:47.363 17:17:07 -- nvmf/common.sh@446 -- # tail -n +2 00:07:47.363 17:17:07 -- nvmf/common.sh@446 -- # head -n 1 00:07:47.363 17:17:07 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:47.363 17:17:07 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:07:47.627 17:17:07 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:47.627 17:17:07 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:07:47.627 17:17:07 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:07:47.628 17:17:07 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:07:47.628 17:17:07 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:47.628 17:17:07 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:47.628 17:17:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:47.628 17:17:07 -- common/autotest_common.sh@10 -- # set +x 00:07:47.628 17:17:07 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:47.628 17:17:07 -- target/nvmf_example.sh@34 -- # nvmfpid=2550746 00:07:47.628 17:17:07 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:47.628 17:17:07 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:47.628 17:17:07 -- target/nvmf_example.sh@36 -- # waitforlisten 2550746 00:07:47.628 17:17:07 -- common/autotest_common.sh@829 -- # '[' -z 2550746 ']' 00:07:47.628 17:17:07 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.628 17:17:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.628 17:17:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.628 17:17:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.628 17:17:07 -- common/autotest_common.sh@10 -- # set +x 00:07:47.628 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.565 17:17:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.565 17:17:08 -- common/autotest_common.sh@862 -- # return 0 00:07:48.565 17:17:08 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:48.565 17:17:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:48.565 17:17:08 -- common/autotest_common.sh@10 -- # set +x 00:07:48.565 17:17:08 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:48.565 17:17:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.565 17:17:08 -- common/autotest_common.sh@10 -- # set +x 00:07:48.565 17:17:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.565 17:17:08 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:48.565 17:17:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.565 17:17:08 -- common/autotest_common.sh@10 -- # set +x 00:07:48.565 17:17:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.565 17:17:08 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:48.565 17:17:08 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:48.565 17:17:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.565 17:17:08 -- common/autotest_common.sh@10 -- # set +x 00:07:48.565 17:17:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.565 17:17:08 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:48.565 17:17:08 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:48.565 17:17:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.565 17:17:08 -- common/autotest_common.sh@10 -- # set +x 00:07:48.565 17:17:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.565 17:17:08 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:48.565 17:17:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.565 17:17:08 -- common/autotest_common.sh@10 -- # set +x 00:07:48.565 17:17:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.565 17:17:08 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:48.565 17:17:08 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:48.823 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.040 Initializing NVMe Controllers 00:08:01.040 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:01.040 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
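Stripped of the xtrace plumbing, the nvmf_example test above boils down to: start the example nvmf target, create an RDMA transport, expose a 64 MiB malloc bdev through subsystem cnode1, listen on the first mlx5 port's IP, and drive it with spdk_nvme_perf. A condensed sketch of those steps, using scripts/rpc.py where the harness uses its rpc_cmd wrapper and a plain sleep where it uses waitforlisten; every command and argument is taken from the trace above, only the $SPDK shorthand is added:

#!/usr/bin/env bash
set -e
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk      # tree used in this run

# RDMA-side IP of the first mlx5 port, extracted the same way allocate_nic_ips does above
ADDR=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.8 here

# 1. Start the example nvmf target with the flags used above (-m 0xF pins it to cores 0-3)
$SPDK/build/examples/nvmf -i 0 -g 10000 -m 0xF &
sleep 3   # the harness instead polls /var/tmp/spdk.sock via waitforlisten

# 2. Configure it over JSON-RPC, mirroring the rpc_cmd calls in the trace
$SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512          # prints the bdev name, Malloc0 in this run
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a "$ADDR" -s 4420

# 3. Initiator-side workload: queue depth 64, 4 KiB I/O, mixed randrw, 10 seconds
$SPDK/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r "trtype:rdma adrfam:IPv4 traddr:$ADDR trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1"

The result table that follows reports roughly 25.6k IOPS and about 2.5 ms average latency for that workload against the single malloc namespace.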
00:08:01.040 Initialization complete. Launching workers. 00:08:01.040 ======================================================== 00:08:01.040 Latency(us) 00:08:01.040 Device Information : IOPS MiB/s Average min max 00:08:01.040 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 25557.50 99.83 2504.49 594.69 14047.52 00:08:01.040 ======================================================== 00:08:01.040 Total : 25557.50 99.83 2504.49 594.69 14047.52 00:08:01.040 00:08:01.040 17:17:19 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:01.040 17:17:19 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:01.040 17:17:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:01.040 17:17:19 -- nvmf/common.sh@116 -- # sync 00:08:01.040 17:17:19 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:01.040 17:17:19 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:01.040 17:17:19 -- nvmf/common.sh@119 -- # set +e 00:08:01.040 17:17:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:01.040 17:17:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:01.040 rmmod nvme_rdma 00:08:01.040 rmmod nvme_fabrics 00:08:01.040 17:17:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:01.040 17:17:19 -- nvmf/common.sh@123 -- # set -e 00:08:01.040 17:17:19 -- nvmf/common.sh@124 -- # return 0 00:08:01.040 17:17:19 -- nvmf/common.sh@477 -- # '[' -n 2550746 ']' 00:08:01.040 17:17:19 -- nvmf/common.sh@478 -- # killprocess 2550746 00:08:01.040 17:17:19 -- common/autotest_common.sh@936 -- # '[' -z 2550746 ']' 00:08:01.040 17:17:19 -- common/autotest_common.sh@940 -- # kill -0 2550746 00:08:01.040 17:17:19 -- common/autotest_common.sh@941 -- # uname 00:08:01.040 17:17:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:01.040 17:17:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2550746 00:08:01.040 17:17:19 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:01.040 17:17:19 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:01.040 17:17:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2550746' 00:08:01.040 killing process with pid 2550746 00:08:01.040 17:17:19 -- common/autotest_common.sh@955 -- # kill 2550746 00:08:01.040 17:17:19 -- common/autotest_common.sh@960 -- # wait 2550746 00:08:01.040 nvmf threads initialize successfully 00:08:01.040 bdev subsystem init successfully 00:08:01.040 created a nvmf target service 00:08:01.040 create targets's poll groups done 00:08:01.040 all subsystems of target started 00:08:01.040 nvmf target is running 00:08:01.040 all subsystems of target stopped 00:08:01.040 destroy targets's poll groups done 00:08:01.040 destroyed the nvmf target service 00:08:01.040 bdev subsystem finish successfully 00:08:01.040 nvmf threads destroy successfully 00:08:01.040 17:17:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:01.040 17:17:19 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:01.040 17:17:19 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:01.040 17:17:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:01.040 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:08:01.040 00:08:01.040 real 0m19.908s 00:08:01.040 user 0m52.346s 00:08:01.040 sys 0m5.759s 00:08:01.040 17:17:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.040 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:08:01.040 ************************************ 00:08:01.040 END TEST nvmf_example 00:08:01.040 
************************************ 00:08:01.040 17:17:19 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:01.040 17:17:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:01.040 17:17:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.040 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:08:01.040 ************************************ 00:08:01.040 START TEST nvmf_filesystem 00:08:01.040 ************************************ 00:08:01.040 17:17:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:01.040 * Looking for test storage... 00:08:01.040 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:01.040 17:17:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:01.040 17:17:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:01.040 17:17:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:01.040 17:17:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:01.040 17:17:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:01.040 17:17:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:01.040 17:17:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:01.040 17:17:20 -- scripts/common.sh@335 -- # IFS=.-: 00:08:01.040 17:17:20 -- scripts/common.sh@335 -- # read -ra ver1 00:08:01.040 17:17:20 -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.040 17:17:20 -- scripts/common.sh@336 -- # read -ra ver2 00:08:01.040 17:17:20 -- scripts/common.sh@337 -- # local 'op=<' 00:08:01.040 17:17:20 -- scripts/common.sh@339 -- # ver1_l=2 00:08:01.040 17:17:20 -- scripts/common.sh@340 -- # ver2_l=1 00:08:01.040 17:17:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:01.040 17:17:20 -- scripts/common.sh@343 -- # case "$op" in 00:08:01.040 17:17:20 -- scripts/common.sh@344 -- # : 1 00:08:01.040 17:17:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:01.040 17:17:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.040 17:17:20 -- scripts/common.sh@364 -- # decimal 1 00:08:01.040 17:17:20 -- scripts/common.sh@352 -- # local d=1 00:08:01.040 17:17:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.040 17:17:20 -- scripts/common.sh@354 -- # echo 1 00:08:01.040 17:17:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:01.040 17:17:20 -- scripts/common.sh@365 -- # decimal 2 00:08:01.040 17:17:20 -- scripts/common.sh@352 -- # local d=2 00:08:01.040 17:17:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.040 17:17:20 -- scripts/common.sh@354 -- # echo 2 00:08:01.040 17:17:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:01.040 17:17:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:01.040 17:17:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:01.040 17:17:20 -- scripts/common.sh@367 -- # return 0 00:08:01.040 17:17:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.040 17:17:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:01.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.040 --rc genhtml_branch_coverage=1 00:08:01.040 --rc genhtml_function_coverage=1 00:08:01.040 --rc genhtml_legend=1 00:08:01.040 --rc geninfo_all_blocks=1 00:08:01.040 --rc geninfo_unexecuted_blocks=1 00:08:01.040 00:08:01.040 ' 00:08:01.040 17:17:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:01.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.040 --rc genhtml_branch_coverage=1 00:08:01.040 --rc genhtml_function_coverage=1 00:08:01.040 --rc genhtml_legend=1 00:08:01.040 --rc geninfo_all_blocks=1 00:08:01.040 --rc geninfo_unexecuted_blocks=1 00:08:01.040 00:08:01.040 ' 00:08:01.040 17:17:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:01.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.040 --rc genhtml_branch_coverage=1 00:08:01.040 --rc genhtml_function_coverage=1 00:08:01.040 --rc genhtml_legend=1 00:08:01.040 --rc geninfo_all_blocks=1 00:08:01.040 --rc geninfo_unexecuted_blocks=1 00:08:01.040 00:08:01.040 ' 00:08:01.040 17:17:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:01.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.040 --rc genhtml_branch_coverage=1 00:08:01.040 --rc genhtml_function_coverage=1 00:08:01.040 --rc genhtml_legend=1 00:08:01.040 --rc geninfo_all_blocks=1 00:08:01.040 --rc geninfo_unexecuted_blocks=1 00:08:01.040 00:08:01.040 ' 00:08:01.040 17:17:20 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:01.041 17:17:20 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:01.041 17:17:20 -- common/autotest_common.sh@34 -- # set -e 00:08:01.041 17:17:20 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:01.041 17:17:20 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:01.041 17:17:20 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:01.041 17:17:20 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:01.041 17:17:20 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:01.041 17:17:20 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:01.041 17:17:20 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:01.041 17:17:20 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 
00:08:01.041 17:17:20 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:01.041 17:17:20 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:01.041 17:17:20 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:01.041 17:17:20 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:01.041 17:17:20 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:01.041 17:17:20 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:01.041 17:17:20 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:01.041 17:17:20 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:01.041 17:17:20 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:01.041 17:17:20 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:01.041 17:17:20 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:01.041 17:17:20 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:01.041 17:17:20 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:01.041 17:17:20 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:01.041 17:17:20 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:01.041 17:17:20 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:01.041 17:17:20 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:01.041 17:17:20 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:01.041 17:17:20 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:01.041 17:17:20 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:01.041 17:17:20 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:01.041 17:17:20 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:01.041 17:17:20 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:01.041 17:17:20 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:01.041 17:17:20 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:01.041 17:17:20 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:01.041 17:17:20 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:01.041 17:17:20 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:01.041 17:17:20 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:01.041 17:17:20 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:01.041 17:17:20 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:01.041 17:17:20 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:01.041 17:17:20 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:01.041 17:17:20 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:01.041 17:17:20 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:01.041 17:17:20 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:01.041 17:17:20 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:01.041 17:17:20 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:01.041 17:17:20 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:01.041 17:17:20 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:01.041 17:17:20 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:01.041 17:17:20 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:01.041 17:17:20 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:01.041 17:17:20 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:01.041 17:17:20 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:01.041 17:17:20 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 
00:08:01.041 17:17:20 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:01.041 17:17:20 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:01.041 17:17:20 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:01.041 17:17:20 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:01.041 17:17:20 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:01.041 17:17:20 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:01.041 17:17:20 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:01.041 17:17:20 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:01.041 17:17:20 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:01.041 17:17:20 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:01.041 17:17:20 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:08:01.041 17:17:20 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:01.041 17:17:20 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:01.041 17:17:20 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:01.041 17:17:20 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:01.041 17:17:20 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:01.041 17:17:20 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:01.041 17:17:20 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:01.041 17:17:20 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:01.041 17:17:20 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:01.041 17:17:20 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:01.041 17:17:20 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:01.041 17:17:20 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:01.041 17:17:20 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:01.041 17:17:20 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:01.041 17:17:20 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:01.041 17:17:20 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:01.041 17:17:20 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:01.041 17:17:20 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:01.041 17:17:20 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:01.041 17:17:20 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:01.041 17:17:20 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:01.041 17:17:20 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:01.041 17:17:20 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:01.041 17:17:20 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:01.041 17:17:20 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:01.041 17:17:20 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:01.041 17:17:20 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:01.041 17:17:20 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:01.041 17:17:20 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:01.041 17:17:20 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:01.041 
17:17:20 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:01.041 17:17:20 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:01.041 17:17:20 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:01.041 17:17:20 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:01.041 #define SPDK_CONFIG_H 00:08:01.041 #define SPDK_CONFIG_APPS 1 00:08:01.041 #define SPDK_CONFIG_ARCH native 00:08:01.041 #undef SPDK_CONFIG_ASAN 00:08:01.041 #undef SPDK_CONFIG_AVAHI 00:08:01.041 #undef SPDK_CONFIG_CET 00:08:01.041 #define SPDK_CONFIG_COVERAGE 1 00:08:01.041 #define SPDK_CONFIG_CROSS_PREFIX 00:08:01.041 #undef SPDK_CONFIG_CRYPTO 00:08:01.041 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:01.041 #undef SPDK_CONFIG_CUSTOMOCF 00:08:01.041 #undef SPDK_CONFIG_DAOS 00:08:01.041 #define SPDK_CONFIG_DAOS_DIR 00:08:01.041 #define SPDK_CONFIG_DEBUG 1 00:08:01.041 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:01.041 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:01.041 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:01.041 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:01.041 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:01.041 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:01.041 #define SPDK_CONFIG_EXAMPLES 1 00:08:01.041 #undef SPDK_CONFIG_FC 00:08:01.041 #define SPDK_CONFIG_FC_PATH 00:08:01.041 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:01.041 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:01.041 #undef SPDK_CONFIG_FUSE 00:08:01.041 #undef SPDK_CONFIG_FUZZER 00:08:01.041 #define SPDK_CONFIG_FUZZER_LIB 00:08:01.041 #undef SPDK_CONFIG_GOLANG 00:08:01.041 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:01.041 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:01.041 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:01.041 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:01.041 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:01.041 #define SPDK_CONFIG_IDXD 1 00:08:01.041 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:01.041 #undef SPDK_CONFIG_IPSEC_MB 00:08:01.041 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:01.041 #define SPDK_CONFIG_ISAL 1 00:08:01.041 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:01.041 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:01.041 #define SPDK_CONFIG_LIBDIR 00:08:01.041 #undef SPDK_CONFIG_LTO 00:08:01.041 #define SPDK_CONFIG_MAX_LCORES 00:08:01.041 #define SPDK_CONFIG_NVME_CUSE 1 00:08:01.041 #undef SPDK_CONFIG_OCF 00:08:01.041 #define SPDK_CONFIG_OCF_PATH 00:08:01.041 #define SPDK_CONFIG_OPENSSL_PATH 00:08:01.041 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:01.041 #undef SPDK_CONFIG_PGO_USE 00:08:01.041 #define SPDK_CONFIG_PREFIX /usr/local 00:08:01.041 #undef SPDK_CONFIG_RAID5F 00:08:01.041 #undef SPDK_CONFIG_RBD 00:08:01.041 #define SPDK_CONFIG_RDMA 1 00:08:01.041 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:01.041 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:01.041 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:01.041 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:01.041 #define SPDK_CONFIG_SHARED 1 00:08:01.041 #undef SPDK_CONFIG_SMA 00:08:01.041 #define SPDK_CONFIG_TESTS 1 00:08:01.041 #undef SPDK_CONFIG_TSAN 00:08:01.041 #define SPDK_CONFIG_UBLK 1 00:08:01.041 #define SPDK_CONFIG_UBSAN 1 00:08:01.042 #undef SPDK_CONFIG_UNIT_TESTS 00:08:01.042 #undef SPDK_CONFIG_URING 00:08:01.042 #define SPDK_CONFIG_URING_PATH 00:08:01.042 #undef SPDK_CONFIG_URING_ZNS 00:08:01.042 #undef SPDK_CONFIG_USDT 00:08:01.042 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:01.042 #undef 
SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:01.042 #undef SPDK_CONFIG_VFIO_USER 00:08:01.042 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:01.042 #define SPDK_CONFIG_VHOST 1 00:08:01.042 #define SPDK_CONFIG_VIRTIO 1 00:08:01.042 #undef SPDK_CONFIG_VTUNE 00:08:01.042 #define SPDK_CONFIG_VTUNE_DIR 00:08:01.042 #define SPDK_CONFIG_WERROR 1 00:08:01.042 #define SPDK_CONFIG_WPDK_DIR 00:08:01.042 #undef SPDK_CONFIG_XNVME 00:08:01.042 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:01.042 17:17:20 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:01.042 17:17:20 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:01.042 17:17:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.042 17:17:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.042 17:17:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.042 17:17:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.042 17:17:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.042 17:17:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.042 17:17:20 -- paths/export.sh@5 -- # export PATH 00:08:01.042 17:17:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.042 17:17:20 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:01.042 17:17:20 -- pm/common@6 -- # dirname 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:01.042 17:17:20 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:01.042 17:17:20 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:01.042 17:17:20 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:01.042 17:17:20 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:01.042 17:17:20 -- pm/common@16 -- # TEST_TAG=N/A 00:08:01.042 17:17:20 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:01.042 17:17:20 -- common/autotest_common.sh@52 -- # : 1 00:08:01.042 17:17:20 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:01.042 17:17:20 -- common/autotest_common.sh@56 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:01.042 17:17:20 -- common/autotest_common.sh@58 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:01.042 17:17:20 -- common/autotest_common.sh@60 -- # : 1 00:08:01.042 17:17:20 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:01.042 17:17:20 -- common/autotest_common.sh@62 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:01.042 17:17:20 -- common/autotest_common.sh@64 -- # : 00:08:01.042 17:17:20 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:01.042 17:17:20 -- common/autotest_common.sh@66 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:01.042 17:17:20 -- common/autotest_common.sh@68 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:01.042 17:17:20 -- common/autotest_common.sh@70 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:01.042 17:17:20 -- common/autotest_common.sh@72 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:01.042 17:17:20 -- common/autotest_common.sh@74 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:01.042 17:17:20 -- common/autotest_common.sh@76 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:01.042 17:17:20 -- common/autotest_common.sh@78 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:01.042 17:17:20 -- common/autotest_common.sh@80 -- # : 1 00:08:01.042 17:17:20 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:01.042 17:17:20 -- common/autotest_common.sh@82 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:01.042 17:17:20 -- common/autotest_common.sh@84 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:01.042 17:17:20 -- common/autotest_common.sh@86 -- # : 1 00:08:01.042 17:17:20 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:01.042 17:17:20 -- common/autotest_common.sh@88 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:01.042 17:17:20 -- common/autotest_common.sh@90 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:01.042 17:17:20 -- 
common/autotest_common.sh@92 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:01.042 17:17:20 -- common/autotest_common.sh@94 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:01.042 17:17:20 -- common/autotest_common.sh@96 -- # : rdma 00:08:01.042 17:17:20 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:01.042 17:17:20 -- common/autotest_common.sh@98 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:01.042 17:17:20 -- common/autotest_common.sh@100 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:01.042 17:17:20 -- common/autotest_common.sh@102 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:01.042 17:17:20 -- common/autotest_common.sh@104 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:01.042 17:17:20 -- common/autotest_common.sh@106 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:01.042 17:17:20 -- common/autotest_common.sh@108 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:01.042 17:17:20 -- common/autotest_common.sh@110 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:01.042 17:17:20 -- common/autotest_common.sh@112 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:01.042 17:17:20 -- common/autotest_common.sh@114 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:01.042 17:17:20 -- common/autotest_common.sh@116 -- # : 1 00:08:01.042 17:17:20 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:01.042 17:17:20 -- common/autotest_common.sh@118 -- # : 00:08:01.042 17:17:20 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:01.042 17:17:20 -- common/autotest_common.sh@120 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:01.042 17:17:20 -- common/autotest_common.sh@122 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:01.042 17:17:20 -- common/autotest_common.sh@124 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:01.042 17:17:20 -- common/autotest_common.sh@126 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:01.042 17:17:20 -- common/autotest_common.sh@128 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:01.042 17:17:20 -- common/autotest_common.sh@130 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:01.042 17:17:20 -- common/autotest_common.sh@132 -- # : 00:08:01.042 17:17:20 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:01.042 17:17:20 -- common/autotest_common.sh@134 -- # : true 00:08:01.042 17:17:20 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:01.042 17:17:20 -- common/autotest_common.sh@136 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:01.042 17:17:20 -- common/autotest_common.sh@138 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:01.042 
17:17:20 -- common/autotest_common.sh@140 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:01.042 17:17:20 -- common/autotest_common.sh@142 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:01.042 17:17:20 -- common/autotest_common.sh@144 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:01.042 17:17:20 -- common/autotest_common.sh@146 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:01.042 17:17:20 -- common/autotest_common.sh@148 -- # : mlx5 00:08:01.042 17:17:20 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:01.042 17:17:20 -- common/autotest_common.sh@150 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:01.042 17:17:20 -- common/autotest_common.sh@152 -- # : 0 00:08:01.042 17:17:20 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:01.042 17:17:20 -- common/autotest_common.sh@154 -- # : 0 00:08:01.043 17:17:20 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:01.043 17:17:20 -- common/autotest_common.sh@156 -- # : 0 00:08:01.043 17:17:20 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:01.043 17:17:20 -- common/autotest_common.sh@158 -- # : 0 00:08:01.043 17:17:20 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:01.043 17:17:20 -- common/autotest_common.sh@160 -- # : 0 00:08:01.043 17:17:20 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:01.043 17:17:20 -- common/autotest_common.sh@163 -- # : 00:08:01.043 17:17:20 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:01.043 17:17:20 -- common/autotest_common.sh@165 -- # : 0 00:08:01.043 17:17:20 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:01.043 17:17:20 -- common/autotest_common.sh@167 -- # : 0 00:08:01.043 17:17:20 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:01.043 17:17:20 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:01.043 17:17:20 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:01.043 17:17:20 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:01.043 17:17:20 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:01.043 17:17:20 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.043 17:17:20 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.043 17:17:20 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.043 17:17:20 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.043 17:17:20 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:01.043 17:17:20 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:01.043 17:17:20 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:01.043 17:17:20 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:01.043 17:17:20 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:01.043 17:17:20 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:01.043 17:17:20 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:01.043 17:17:20 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:01.043 17:17:20 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:01.043 17:17:20 -- common/autotest_common.sh@190 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:01.043 17:17:20 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:01.043 17:17:20 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:01.043 17:17:20 -- common/autotest_common.sh@196 -- # cat 00:08:01.043 17:17:20 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:01.043 17:17:20 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:01.043 17:17:20 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:01.043 17:17:20 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:01.043 17:17:20 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:01.043 17:17:20 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:01.043 17:17:20 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:01.043 17:17:20 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:01.043 17:17:20 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:01.043 17:17:20 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:01.043 17:17:20 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:01.043 17:17:20 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:01.043 17:17:20 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:01.043 17:17:20 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:01.043 17:17:20 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:01.043 17:17:20 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:01.043 17:17:20 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:01.043 17:17:20 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:01.043 17:17:20 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:01.043 17:17:20 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:01.043 17:17:20 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:01.043 17:17:20 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:01.043 17:17:20 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:01.043 17:17:20 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:01.043 17:17:20 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:01.043 17:17:20 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:01.043 17:17:20 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:01.043 17:17:20 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:01.043 17:17:20 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:01.043 17:17:20 -- common/autotest_common.sh@259 -- # valgrind= 00:08:01.043 17:17:20 -- 
common/autotest_common.sh@265 -- # uname -s 00:08:01.043 17:17:20 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:01.043 17:17:20 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:01.043 17:17:20 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:01.043 17:17:20 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:01.043 17:17:20 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:01.043 17:17:20 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:01.043 17:17:20 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:01.043 17:17:20 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j112 00:08:01.043 17:17:20 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:01.043 17:17:20 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:01.043 17:17:20 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:01.043 17:17:20 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:01.043 17:17:20 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:01.043 17:17:20 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:01.043 17:17:20 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:01.043 17:17:20 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=rdma 00:08:01.043 17:17:20 -- common/autotest_common.sh@319 -- # [[ -z 2552998 ]] 00:08:01.043 17:17:20 -- common/autotest_common.sh@319 -- # kill -0 2552998 00:08:01.043 17:17:20 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:01.043 17:17:20 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:01.043 17:17:20 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:01.043 17:17:20 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:01.043 17:17:20 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:01.043 17:17:20 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:01.043 17:17:20 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:01.043 17:17:20 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:01.043 17:17:20 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.DFZl8J 00:08:01.043 17:17:20 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:01.043 17:17:20 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:01.043 17:17:20 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:01.043 17:17:20 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.DFZl8J/tests/target /tmp/spdk.DFZl8J 00:08:01.043 17:17:20 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:01.043 17:17:20 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:01.043 17:17:20 -- common/autotest_common.sh@328 -- # df -T 00:08:01.043 17:17:20 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:01.043 17:17:20 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:08:01.043 17:17:20 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:01.043 17:17:20 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:08:01.043 17:17:20 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:08:01.043 17:17:20 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:01.043 17:17:20 -- common/autotest_common.sh@361 
-- # read -r source fs size use avail _ mount 00:08:01.044 17:17:20 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:08:01.044 17:17:20 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:08:01.044 17:17:20 -- common/autotest_common.sh@363 -- # avails["$mount"]=4096 00:08:01.044 17:17:20 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:08:01.044 17:17:20 -- common/autotest_common.sh@364 -- # uses["$mount"]=5284425728 00:08:01.044 17:17:20 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:01.044 17:17:20 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:08:01.044 17:17:20 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 00:08:01.044 17:17:20 -- common/autotest_common.sh@363 -- # avails["$mount"]=55137030144 00:08:01.044 17:17:20 -- common/autotest_common.sh@363 -- # sizes["$mount"]=61730615296 00:08:01.044 17:17:20 -- common/autotest_common.sh@364 -- # uses["$mount"]=6593585152 00:08:01.044 17:17:20 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:01.044 17:17:20 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:01.044 17:17:20 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:01.044 17:17:20 -- common/autotest_common.sh@363 -- # avails["$mount"]=30816886784 00:08:01.044 17:17:20 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865305600 00:08:01.044 17:17:20 -- common/autotest_common.sh@364 -- # uses["$mount"]=48418816 00:08:01.044 17:17:20 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:01.044 17:17:20 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:01.044 17:17:20 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:01.044 17:17:20 -- common/autotest_common.sh@363 -- # avails["$mount"]=12336685056 00:08:01.044 17:17:20 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12346126336 00:08:01.044 17:17:20 -- common/autotest_common.sh@364 -- # uses["$mount"]=9441280 00:08:01.044 17:17:20 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:01.044 17:17:20 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:01.044 17:17:20 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:01.044 17:17:20 -- common/autotest_common.sh@363 -- # avails["$mount"]=30864072704 00:08:01.044 17:17:20 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865309696 00:08:01.044 17:17:20 -- common/autotest_common.sh@364 -- # uses["$mount"]=1236992 00:08:01.044 17:17:20 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:01.044 17:17:20 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:01.044 17:17:20 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:01.044 17:17:20 -- common/autotest_common.sh@363 -- # avails["$mount"]=6173048832 00:08:01.044 17:17:20 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6173061120 00:08:01.044 17:17:20 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:01.044 17:17:20 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:01.044 17:17:20 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:08:01.044 * Looking for test storage... 
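Above, set_test_storage (autotest_common.sh) has parsed `df -T` into per-mount filesystem, size and available-byte tables; the entries that follow walk the candidate directories (the test dir, then a /tmp/spdk.XXXXXX fallback) and pick the first one whose filesystem offers at least the requested 2214592512 bytes, exporting it as SPDK_TEST_STORAGE. A rough sketch of that selection logic with a hypothetical helper name (the real function also special-cases tmpfs/ramfs and can grow the mount, which is skipped here):

# pick_test_storage REQUESTED_BYTES DIR... -> prints the first directory whose
# filesystem has at least REQUESTED_BYTES available.
pick_test_storage() {
    local requested=$1 dir avail; shift
    for dir in "$@"; do
        mkdir -p "$dir" 2>/dev/null || continue
        avail=$(df -B1 --output=avail "$dir" | tail -n 1)    # drop the header line
        if (( avail >= requested )); then
            echo "$dir"
            return 0
        fi
    done
    return 1
}

SPDK_TEST_STORAGE=$(pick_test_storage 2214592512 \
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target "$(mktemp -udt spdk.XXXXXX)") || exit 1
export SPDK_TEST_STORAGE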
00:08:01.044 17:17:20 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:01.044 17:17:20 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:01.044 17:17:20 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:01.044 17:17:20 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:01.044 17:17:20 -- common/autotest_common.sh@373 -- # mount=/ 00:08:01.044 17:17:20 -- common/autotest_common.sh@375 -- # target_space=55137030144 00:08:01.044 17:17:20 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:01.044 17:17:20 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:01.044 17:17:20 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:08:01.044 17:17:20 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:08:01.044 17:17:20 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:08:01.044 17:17:20 -- common/autotest_common.sh@382 -- # new_size=8808177664 00:08:01.044 17:17:20 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:01.044 17:17:20 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:01.044 17:17:20 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:01.044 17:17:20 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:01.044 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:01.044 17:17:20 -- common/autotest_common.sh@390 -- # return 0 00:08:01.044 17:17:20 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:01.044 17:17:20 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:01.044 17:17:20 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:01.044 17:17:20 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:01.044 17:17:20 -- common/autotest_common.sh@1682 -- # true 00:08:01.044 17:17:20 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:01.044 17:17:20 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:01.044 17:17:20 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:01.044 17:17:20 -- common/autotest_common.sh@27 -- # exec 00:08:01.044 17:17:20 -- common/autotest_common.sh@29 -- # exec 00:08:01.044 17:17:20 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:01.044 17:17:20 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:01.044 17:17:20 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:01.044 17:17:20 -- common/autotest_common.sh@18 -- # set -x 00:08:01.044 17:17:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:01.044 17:17:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:01.044 17:17:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:01.044 17:17:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:01.044 17:17:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:01.044 17:17:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:01.044 17:17:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:01.044 17:17:20 -- scripts/common.sh@335 -- # IFS=.-: 00:08:01.044 17:17:20 -- scripts/common.sh@335 -- # read -ra ver1 00:08:01.044 17:17:20 -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.044 17:17:20 -- scripts/common.sh@336 -- # read -ra ver2 00:08:01.044 17:17:20 -- scripts/common.sh@337 -- # local 'op=<' 00:08:01.044 17:17:20 -- scripts/common.sh@339 -- # ver1_l=2 00:08:01.044 17:17:20 -- scripts/common.sh@340 -- # ver2_l=1 00:08:01.044 17:17:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:01.044 17:17:20 -- scripts/common.sh@343 -- # case "$op" in 00:08:01.044 17:17:20 -- scripts/common.sh@344 -- # : 1 00:08:01.044 17:17:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:01.044 17:17:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.044 17:17:20 -- scripts/common.sh@364 -- # decimal 1 00:08:01.044 17:17:20 -- scripts/common.sh@352 -- # local d=1 00:08:01.044 17:17:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.044 17:17:20 -- scripts/common.sh@354 -- # echo 1 00:08:01.044 17:17:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:01.044 17:17:20 -- scripts/common.sh@365 -- # decimal 2 00:08:01.044 17:17:20 -- scripts/common.sh@352 -- # local d=2 00:08:01.044 17:17:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.044 17:17:20 -- scripts/common.sh@354 -- # echo 2 00:08:01.044 17:17:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:01.044 17:17:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:01.044 17:17:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:01.044 17:17:20 -- scripts/common.sh@367 -- # return 0 00:08:01.044 17:17:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.044 17:17:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:01.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.044 --rc genhtml_branch_coverage=1 00:08:01.044 --rc genhtml_function_coverage=1 00:08:01.044 --rc genhtml_legend=1 00:08:01.044 --rc geninfo_all_blocks=1 00:08:01.044 --rc geninfo_unexecuted_blocks=1 00:08:01.044 00:08:01.044 ' 00:08:01.044 17:17:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:01.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.044 --rc genhtml_branch_coverage=1 00:08:01.044 --rc genhtml_function_coverage=1 00:08:01.044 --rc genhtml_legend=1 00:08:01.044 --rc geninfo_all_blocks=1 00:08:01.044 --rc geninfo_unexecuted_blocks=1 00:08:01.044 00:08:01.044 ' 00:08:01.044 17:17:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:01.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.044 --rc genhtml_branch_coverage=1 00:08:01.044 --rc genhtml_function_coverage=1 00:08:01.044 --rc genhtml_legend=1 00:08:01.044 --rc geninfo_all_blocks=1 00:08:01.044 --rc 
geninfo_unexecuted_blocks=1 00:08:01.044 00:08:01.044 ' 00:08:01.044 17:17:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:01.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.044 --rc genhtml_branch_coverage=1 00:08:01.044 --rc genhtml_function_coverage=1 00:08:01.044 --rc genhtml_legend=1 00:08:01.045 --rc geninfo_all_blocks=1 00:08:01.045 --rc geninfo_unexecuted_blocks=1 00:08:01.045 00:08:01.045 ' 00:08:01.045 17:17:20 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.045 17:17:20 -- nvmf/common.sh@7 -- # uname -s 00:08:01.045 17:17:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.045 17:17:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.045 17:17:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.045 17:17:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.045 17:17:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.045 17:17:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.045 17:17:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.045 17:17:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.045 17:17:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.045 17:17:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.045 17:17:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:01.045 17:17:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:01.045 17:17:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.045 17:17:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.045 17:17:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.045 17:17:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:01.045 17:17:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.045 17:17:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.045 17:17:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.045 17:17:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.045 17:17:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.045 17:17:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.045 17:17:20 -- paths/export.sh@5 -- # export PATH 00:08:01.045 17:17:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.045 17:17:20 -- nvmf/common.sh@46 -- # : 0 00:08:01.045 17:17:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:01.045 17:17:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:01.045 17:17:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:01.045 17:17:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.045 17:17:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.045 17:17:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:01.045 17:17:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:01.045 17:17:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:01.045 17:17:20 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:01.045 17:17:20 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:01.045 17:17:20 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:01.045 17:17:20 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:01.045 17:17:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.045 17:17:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:01.045 17:17:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:01.045 17:17:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:01.045 17:17:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.045 17:17:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.045 17:17:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.045 17:17:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:01.045 17:17:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:01.045 17:17:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:01.045 17:17:20 -- common/autotest_common.sh@10 -- # set +x 00:08:07.617 17:17:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:07.617 17:17:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:07.617 17:17:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:07.617 17:17:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:07.617 17:17:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:07.617 17:17:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:07.617 17:17:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:07.617 17:17:26 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:07.617 17:17:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:07.617 17:17:26 -- nvmf/common.sh@295 -- # e810=() 00:08:07.617 17:17:26 -- nvmf/common.sh@295 -- # local -ga e810 00:08:07.617 17:17:26 -- nvmf/common.sh@296 -- # x722=() 00:08:07.617 17:17:26 -- nvmf/common.sh@296 -- # local -ga x722 00:08:07.617 17:17:26 -- nvmf/common.sh@297 -- # mlx=() 00:08:07.617 17:17:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:07.617 17:17:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.617 17:17:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.617 17:17:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.617 17:17:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.617 17:17:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.617 17:17:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.617 17:17:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.617 17:17:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.617 17:17:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.617 17:17:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.617 17:17:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.617 17:17:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:07.617 17:17:26 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:07.617 17:17:26 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:07.617 17:17:26 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:07.617 17:17:26 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:07.617 17:17:26 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:07.617 17:17:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:07.617 17:17:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:07.617 17:17:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:07.617 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:07.617 17:17:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:07.617 17:17:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:07.617 17:17:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:07.617 17:17:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:07.617 17:17:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:07.617 17:17:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:07.617 17:17:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:07.617 17:17:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:07.617 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:07.617 17:17:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:07.617 17:17:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:07.617 17:17:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:07.617 17:17:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:07.617 17:17:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:07.617 17:17:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:07.617 17:17:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:07.617 17:17:26 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:07.617 17:17:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:07.617 
17:17:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.617 17:17:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:07.617 17:17:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.617 17:17:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:07.617 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:07.617 17:17:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.617 17:17:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:07.617 17:17:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.617 17:17:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:07.618 17:17:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.618 17:17:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:07.618 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:07.618 17:17:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.618 17:17:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:07.618 17:17:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:07.618 17:17:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:07.618 17:17:26 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:07.618 17:17:26 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:07.618 17:17:26 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:07.618 17:17:26 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:07.618 17:17:26 -- nvmf/common.sh@57 -- # uname 00:08:07.618 17:17:26 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:07.618 17:17:26 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:07.618 17:17:26 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:07.618 17:17:26 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:07.618 17:17:26 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:07.618 17:17:26 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:07.618 17:17:26 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:07.618 17:17:26 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:07.618 17:17:26 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:07.618 17:17:26 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:07.618 17:17:26 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:07.618 17:17:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:07.618 17:17:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:07.618 17:17:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:07.618 17:17:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:07.618 17:17:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:07.618 17:17:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:07.618 17:17:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.618 17:17:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:07.618 17:17:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:07.618 17:17:26 -- nvmf/common.sh@104 -- # continue 2 00:08:07.618 17:17:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:07.618 17:17:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.618 17:17:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:07.618 17:17:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.618 17:17:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:07.618 17:17:26 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:07.618 17:17:26 -- nvmf/common.sh@104 -- # continue 2 00:08:07.618 17:17:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:07.618 17:17:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:07.618 17:17:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:07.618 17:17:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:07.618 17:17:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:07.618 17:17:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:07.618 17:17:26 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:07.618 17:17:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:07.618 17:17:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:07.618 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:07.618 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:07.618 altname enp217s0f0np0 00:08:07.618 altname ens818f0np0 00:08:07.618 inet 192.168.100.8/24 scope global mlx_0_0 00:08:07.618 valid_lft forever preferred_lft forever 00:08:07.618 17:17:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:07.618 17:17:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:07.618 17:17:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:07.618 17:17:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:07.618 17:17:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:07.618 17:17:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:07.618 17:17:26 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:07.618 17:17:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:07.618 17:17:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:07.618 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:07.618 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:07.618 altname enp217s0f1np1 00:08:07.618 altname ens818f1np1 00:08:07.618 inet 192.168.100.9/24 scope global mlx_0_1 00:08:07.618 valid_lft forever preferred_lft forever 00:08:07.618 17:17:26 -- nvmf/common.sh@410 -- # return 0 00:08:07.618 17:17:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:07.618 17:17:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:07.618 17:17:26 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:07.618 17:17:26 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:07.618 17:17:26 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:07.618 17:17:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:07.618 17:17:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:07.618 17:17:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:07.618 17:17:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:07.618 17:17:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:07.618 17:17:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:07.618 17:17:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.618 17:17:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:07.618 17:17:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:07.618 17:17:26 -- nvmf/common.sh@104 -- # continue 2 00:08:07.618 17:17:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:07.618 17:17:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.618 17:17:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:07.618 17:17:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.618 17:17:26 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:07.618 17:17:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:07.618 17:17:26 -- nvmf/common.sh@104 -- # continue 2 00:08:07.618 17:17:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:07.618 17:17:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:07.618 17:17:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:07.618 17:17:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:07.618 17:17:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:07.618 17:17:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:07.618 17:17:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:07.618 17:17:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:07.618 17:17:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:07.618 17:17:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:07.618 17:17:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:07.618 17:17:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:07.618 17:17:26 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:07.618 192.168.100.9' 00:08:07.618 17:17:26 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:07.618 192.168.100.9' 00:08:07.618 17:17:26 -- nvmf/common.sh@445 -- # head -n 1 00:08:07.618 17:17:26 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:07.618 17:17:26 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:07.618 192.168.100.9' 00:08:07.618 17:17:26 -- nvmf/common.sh@446 -- # tail -n +2 00:08:07.618 17:17:26 -- nvmf/common.sh@446 -- # head -n 1 00:08:07.618 17:17:26 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:07.618 17:17:26 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:07.618 17:17:26 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:07.618 17:17:26 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:07.618 17:17:26 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:07.618 17:17:26 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:07.618 17:17:26 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:07.618 17:17:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:07.618 17:17:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.618 17:17:26 -- common/autotest_common.sh@10 -- # set +x 00:08:07.618 ************************************ 00:08:07.618 START TEST nvmf_filesystem_no_in_capsule 00:08:07.618 ************************************ 00:08:07.618 17:17:26 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:07.618 17:17:26 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:07.618 17:17:26 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:07.618 17:17:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:07.618 17:17:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:07.618 17:17:26 -- common/autotest_common.sh@10 -- # set +x 00:08:07.618 17:17:26 -- nvmf/common.sh@469 -- # nvmfpid=2556432 00:08:07.618 17:17:26 -- nvmf/common.sh@470 -- # waitforlisten 2556432 00:08:07.618 17:17:26 -- common/autotest_common.sh@829 -- # '[' -z 2556432 ']' 00:08:07.618 17:17:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.618 17:17:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.618 17:17:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
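For reference, the interface discovery traced above reduces to a short pipeline over "ip -o -4 addr show"; a minimal sketch, assuming the ConnectX ports were already renamed to mlx_0_0 / mlx_0_1 by the earlier setup steps of this run:

# Sketch: resolve the IPv4 address of each RDMA-capable port, as get_ip_address does above.
for nic in mlx_0_0 mlx_0_1; do
  ip_addr=$(ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1)
  [ -n "$ip_addr" ] && echo "$nic $ip_addr"
done
# On this host the loop yields 192.168.100.8 and 192.168.100.9, which the harness records
# as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP (the RDMA_IP_LIST seen in the trace).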
00:08:07.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.618 17:17:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.618 17:17:26 -- common/autotest_common.sh@10 -- # set +x 00:08:07.618 17:17:26 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.618 [2024-11-09 17:17:27.012961] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:07.618 [2024-11-09 17:17:27.013014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.618 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.618 [2024-11-09 17:17:27.082436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.618 [2024-11-09 17:17:27.155768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:07.618 [2024-11-09 17:17:27.155898] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.618 [2024-11-09 17:17:27.155908] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.618 [2024-11-09 17:17:27.155919] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.618 [2024-11-09 17:17:27.155970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.618 [2024-11-09 17:17:27.156062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.618 [2024-11-09 17:17:27.156128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.618 [2024-11-09 17:17:27.156130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.186 17:17:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.186 17:17:27 -- common/autotest_common.sh@862 -- # return 0 00:08:08.186 17:17:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:08.186 17:17:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:08.186 17:17:27 -- common/autotest_common.sh@10 -- # set +x 00:08:08.186 17:17:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.186 17:17:27 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:08.186 17:17:27 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:08.186 17:17:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.186 17:17:27 -- common/autotest_common.sh@10 -- # set +x 00:08:08.186 [2024-11-09 17:17:27.884785] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:08.186 [2024-11-09 17:17:27.906134] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x125d090/0x1261580) succeed. 00:08:08.186 [2024-11-09 17:17:27.915223] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x125e680/0x12a2c20) succeed. 
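The target-side setup for this pass (the create_transport call above plus the subsystem plumbing traced in the entries that follow) condenses to the RPC sequence below; an illustrative sketch using the same rpc_cmd wrapper seen in the trace, not a substitute for filesystem.sh:

# Sketch of the nvmf_filesystem_no_in_capsule target setup, mirroring the trace.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # 4 cores, all tracepoint groups
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
# Note: even with -c 0 the target raises the in-capsule size to 256 bytes, the minimum
# required for msdbd=16, as the rdma.c warning above shows.
rpc_cmd bdev_malloc_create 512 512 -b Malloc1          # 512 MiB malloc bdev, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420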
00:08:08.445 17:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.445 17:17:28 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:08.445 17:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.445 17:17:28 -- common/autotest_common.sh@10 -- # set +x 00:08:08.445 Malloc1 00:08:08.445 17:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.445 17:17:28 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:08.445 17:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.445 17:17:28 -- common/autotest_common.sh@10 -- # set +x 00:08:08.445 17:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.445 17:17:28 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:08.445 17:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.445 17:17:28 -- common/autotest_common.sh@10 -- # set +x 00:08:08.445 17:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.445 17:17:28 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:08.445 17:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.445 17:17:28 -- common/autotest_common.sh@10 -- # set +x 00:08:08.445 [2024-11-09 17:17:28.163624] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:08.445 17:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.445 17:17:28 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:08.445 17:17:28 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:08.445 17:17:28 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:08.445 17:17:28 -- common/autotest_common.sh@1369 -- # local bs 00:08:08.445 17:17:28 -- common/autotest_common.sh@1370 -- # local nb 00:08:08.445 17:17:28 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:08.445 17:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.445 17:17:28 -- common/autotest_common.sh@10 -- # set +x 00:08:08.445 17:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.445 17:17:28 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:08.445 { 00:08:08.445 "name": "Malloc1", 00:08:08.445 "aliases": [ 00:08:08.445 "db2c8334-041d-47ab-9187-eeb8a4afaf62" 00:08:08.445 ], 00:08:08.445 "product_name": "Malloc disk", 00:08:08.445 "block_size": 512, 00:08:08.445 "num_blocks": 1048576, 00:08:08.445 "uuid": "db2c8334-041d-47ab-9187-eeb8a4afaf62", 00:08:08.445 "assigned_rate_limits": { 00:08:08.445 "rw_ios_per_sec": 0, 00:08:08.445 "rw_mbytes_per_sec": 0, 00:08:08.445 "r_mbytes_per_sec": 0, 00:08:08.445 "w_mbytes_per_sec": 0 00:08:08.445 }, 00:08:08.445 "claimed": true, 00:08:08.445 "claim_type": "exclusive_write", 00:08:08.445 "zoned": false, 00:08:08.445 "supported_io_types": { 00:08:08.445 "read": true, 00:08:08.445 "write": true, 00:08:08.445 "unmap": true, 00:08:08.445 "write_zeroes": true, 00:08:08.445 "flush": true, 00:08:08.445 "reset": true, 00:08:08.445 "compare": false, 00:08:08.445 "compare_and_write": false, 00:08:08.445 "abort": true, 00:08:08.445 "nvme_admin": false, 00:08:08.445 "nvme_io": false 00:08:08.445 }, 00:08:08.445 "memory_domains": [ 00:08:08.445 { 00:08:08.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.445 "dma_device_type": 2 00:08:08.445 } 00:08:08.445 ], 00:08:08.445 
"driver_specific": {} 00:08:08.445 } 00:08:08.445 ]' 00:08:08.445 17:17:28 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:08.704 17:17:28 -- common/autotest_common.sh@1372 -- # bs=512 00:08:08.704 17:17:28 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:08.704 17:17:28 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:08.704 17:17:28 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:08.704 17:17:28 -- common/autotest_common.sh@1377 -- # echo 512 00:08:08.704 17:17:28 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:08.704 17:17:28 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:09.642 17:17:29 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:09.642 17:17:29 -- common/autotest_common.sh@1187 -- # local i=0 00:08:09.642 17:17:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:09.642 17:17:29 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:09.642 17:17:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:11.546 17:17:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:11.546 17:17:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:11.805 17:17:31 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:11.805 17:17:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:11.805 17:17:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:11.805 17:17:31 -- common/autotest_common.sh@1197 -- # return 0 00:08:11.805 17:17:31 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:11.805 17:17:31 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:11.805 17:17:31 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:11.805 17:17:31 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:11.805 17:17:31 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:11.805 17:17:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:11.805 17:17:31 -- setup/common.sh@80 -- # echo 536870912 00:08:11.805 17:17:31 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:11.805 17:17:31 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:11.805 17:17:31 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:11.805 17:17:31 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:11.805 17:17:31 -- target/filesystem.sh@69 -- # partprobe 00:08:11.805 17:17:31 -- target/filesystem.sh@70 -- # sleep 1 00:08:12.742 17:17:32 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:12.742 17:17:32 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:13.002 17:17:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:13.002 17:17:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.002 17:17:32 -- common/autotest_common.sh@10 -- # set +x 00:08:13.002 ************************************ 00:08:13.002 START TEST filesystem_ext4 00:08:13.002 ************************************ 00:08:13.002 17:17:32 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:13.002 17:17:32 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:13.002 17:17:32 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:13.002 
17:17:32 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:13.002 17:17:32 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:13.002 17:17:32 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:13.002 17:17:32 -- common/autotest_common.sh@914 -- # local i=0 00:08:13.002 17:17:32 -- common/autotest_common.sh@915 -- # local force 00:08:13.002 17:17:32 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:13.002 17:17:32 -- common/autotest_common.sh@918 -- # force=-F 00:08:13.002 17:17:32 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:13.002 mke2fs 1.47.0 (5-Feb-2023) 00:08:13.002 Discarding device blocks: 0/522240 done 00:08:13.002 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:13.002 Filesystem UUID: ddf8afe3-b12b-41eb-9954-b8e8a187f2f1 00:08:13.002 Superblock backups stored on blocks: 00:08:13.002 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:13.002 00:08:13.002 Allocating group tables: 0/64 done 00:08:13.002 Writing inode tables: 0/64 done 00:08:13.002 Creating journal (8192 blocks): done 00:08:13.002 Writing superblocks and filesystem accounting information: 0/64 done 00:08:13.002 00:08:13.002 17:17:32 -- common/autotest_common.sh@931 -- # return 0 00:08:13.002 17:17:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:13.002 17:17:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:13.002 17:17:32 -- target/filesystem.sh@25 -- # sync 00:08:13.002 17:17:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:13.002 17:17:32 -- target/filesystem.sh@27 -- # sync 00:08:13.002 17:17:32 -- target/filesystem.sh@29 -- # i=0 00:08:13.002 17:17:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:13.002 17:17:32 -- target/filesystem.sh@37 -- # kill -0 2556432 00:08:13.002 17:17:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:13.002 17:17:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:13.002 17:17:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:13.002 17:17:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:13.002 00:08:13.002 real 0m0.199s 00:08:13.002 user 0m0.026s 00:08:13.002 sys 0m0.082s 00:08:13.002 17:17:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.002 17:17:32 -- common/autotest_common.sh@10 -- # set +x 00:08:13.002 ************************************ 00:08:13.002 END TEST filesystem_ext4 00:08:13.002 ************************************ 00:08:13.002 17:17:32 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:13.002 17:17:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:13.002 17:17:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.002 17:17:32 -- common/autotest_common.sh@10 -- # set +x 00:08:13.002 ************************************ 00:08:13.002 START TEST filesystem_btrfs 00:08:13.002 ************************************ 00:08:13.002 17:17:32 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:13.002 17:17:32 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:13.002 17:17:32 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:13.002 17:17:32 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:13.002 17:17:32 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:13.002 17:17:32 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:13.002 17:17:32 -- common/autotest_common.sh@914 -- # local 
i=0 00:08:13.002 17:17:32 -- common/autotest_common.sh@915 -- # local force 00:08:13.262 17:17:32 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:13.262 17:17:32 -- common/autotest_common.sh@920 -- # force=-f 00:08:13.262 17:17:32 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:13.262 btrfs-progs v6.8.1 00:08:13.262 See https://btrfs.readthedocs.io for more information. 00:08:13.262 00:08:13.262 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:13.262 NOTE: several default settings have changed in version 5.15, please make sure 00:08:13.262 this does not affect your deployments: 00:08:13.262 - DUP for metadata (-m dup) 00:08:13.262 - enabled no-holes (-O no-holes) 00:08:13.262 - enabled free-space-tree (-R free-space-tree) 00:08:13.262 00:08:13.262 Label: (null) 00:08:13.262 UUID: b376972f-ab78-4c2f-a141-82df1cb87569 00:08:13.262 Node size: 16384 00:08:13.262 Sector size: 4096 (CPU page size: 4096) 00:08:13.262 Filesystem size: 510.00MiB 00:08:13.262 Block group profiles: 00:08:13.262 Data: single 8.00MiB 00:08:13.262 Metadata: DUP 32.00MiB 00:08:13.262 System: DUP 8.00MiB 00:08:13.262 SSD detected: yes 00:08:13.262 Zoned device: no 00:08:13.262 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:13.262 Checksum: crc32c 00:08:13.262 Number of devices: 1 00:08:13.262 Devices: 00:08:13.262 ID SIZE PATH 00:08:13.262 1 510.00MiB /dev/nvme0n1p1 00:08:13.262 00:08:13.262 17:17:32 -- common/autotest_common.sh@931 -- # return 0 00:08:13.262 17:17:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:13.262 17:17:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:13.262 17:17:32 -- target/filesystem.sh@25 -- # sync 00:08:13.262 17:17:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:13.262 17:17:32 -- target/filesystem.sh@27 -- # sync 00:08:13.262 17:17:32 -- target/filesystem.sh@29 -- # i=0 00:08:13.262 17:17:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:13.262 17:17:32 -- target/filesystem.sh@37 -- # kill -0 2556432 00:08:13.262 17:17:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:13.262 17:17:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:13.262 17:17:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:13.262 17:17:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:13.262 00:08:13.262 real 0m0.246s 00:08:13.262 user 0m0.039s 00:08:13.262 sys 0m0.120s 00:08:13.262 17:17:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.262 17:17:33 -- common/autotest_common.sh@10 -- # set +x 00:08:13.262 ************************************ 00:08:13.262 END TEST filesystem_btrfs 00:08:13.262 ************************************ 00:08:13.522 17:17:33 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:13.522 17:17:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:13.522 17:17:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.522 17:17:33 -- common/autotest_common.sh@10 -- # set +x 00:08:13.522 ************************************ 00:08:13.522 START TEST filesystem_xfs 00:08:13.522 ************************************ 00:08:13.522 17:17:33 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:13.522 17:17:33 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:13.522 17:17:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:13.522 17:17:33 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:13.522 17:17:33 -- 
common/autotest_common.sh@912 -- # local fstype=xfs 00:08:13.522 17:17:33 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:13.522 17:17:33 -- common/autotest_common.sh@914 -- # local i=0 00:08:13.522 17:17:33 -- common/autotest_common.sh@915 -- # local force 00:08:13.522 17:17:33 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:13.522 17:17:33 -- common/autotest_common.sh@920 -- # force=-f 00:08:13.522 17:17:33 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:13.522 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:13.522 = sectsz=512 attr=2, projid32bit=1 00:08:13.522 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:13.522 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:13.522 data = bsize=4096 blocks=130560, imaxpct=25 00:08:13.522 = sunit=0 swidth=0 blks 00:08:13.522 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:13.522 log =internal log bsize=4096 blocks=16384, version=2 00:08:13.522 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:13.522 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:13.522 Discarding blocks...Done. 00:08:13.522 17:17:33 -- common/autotest_common.sh@931 -- # return 0 00:08:13.522 17:17:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:13.522 17:17:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:13.522 17:17:33 -- target/filesystem.sh@25 -- # sync 00:08:13.522 17:17:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:13.522 17:17:33 -- target/filesystem.sh@27 -- # sync 00:08:13.522 17:17:33 -- target/filesystem.sh@29 -- # i=0 00:08:13.522 17:17:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:13.522 17:17:33 -- target/filesystem.sh@37 -- # kill -0 2556432 00:08:13.522 17:17:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:13.522 17:17:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:13.522 17:17:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:13.522 17:17:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:13.522 00:08:13.522 real 0m0.206s 00:08:13.522 user 0m0.028s 00:08:13.522 sys 0m0.079s 00:08:13.522 17:17:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.522 17:17:33 -- common/autotest_common.sh@10 -- # set +x 00:08:13.522 ************************************ 00:08:13.522 END TEST filesystem_xfs 00:08:13.522 ************************************ 00:08:13.781 17:17:33 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:13.781 17:17:33 -- target/filesystem.sh@93 -- # sync 00:08:13.781 17:17:33 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:14.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.719 17:17:34 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:14.719 17:17:34 -- common/autotest_common.sh@1208 -- # local i=0 00:08:14.719 17:17:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:14.719 17:17:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:14.719 17:17:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:14.719 17:17:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:14.719 17:17:34 -- common/autotest_common.sh@1220 -- # return 0 00:08:14.719 17:17:34 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.719 17:17:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.719 17:17:34 -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.719 17:17:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.719 17:17:34 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:14.719 17:17:34 -- target/filesystem.sh@101 -- # killprocess 2556432 00:08:14.719 17:17:34 -- common/autotest_common.sh@936 -- # '[' -z 2556432 ']' 00:08:14.719 17:17:34 -- common/autotest_common.sh@940 -- # kill -0 2556432 00:08:14.719 17:17:34 -- common/autotest_common.sh@941 -- # uname 00:08:14.719 17:17:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:14.719 17:17:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2556432 00:08:14.719 17:17:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:14.719 17:17:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:14.719 17:17:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2556432' 00:08:14.719 killing process with pid 2556432 00:08:14.719 17:17:34 -- common/autotest_common.sh@955 -- # kill 2556432 00:08:14.719 17:17:34 -- common/autotest_common.sh@960 -- # wait 2556432 00:08:15.288 17:17:34 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:15.288 00:08:15.288 real 0m7.880s 00:08:15.289 user 0m30.680s 00:08:15.289 sys 0m1.170s 00:08:15.289 17:17:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.289 17:17:34 -- common/autotest_common.sh@10 -- # set +x 00:08:15.289 ************************************ 00:08:15.289 END TEST nvmf_filesystem_no_in_capsule 00:08:15.289 ************************************ 00:08:15.289 17:17:34 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:15.289 17:17:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:15.289 17:17:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.289 17:17:34 -- common/autotest_common.sh@10 -- # set +x 00:08:15.289 ************************************ 00:08:15.289 START TEST nvmf_filesystem_in_capsule 00:08:15.289 ************************************ 00:08:15.289 17:17:34 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:15.289 17:17:34 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:15.289 17:17:34 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:15.289 17:17:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:15.289 17:17:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:15.289 17:17:34 -- common/autotest_common.sh@10 -- # set +x 00:08:15.289 17:17:34 -- nvmf/common.sh@469 -- # nvmfpid=2557994 00:08:15.289 17:17:34 -- nvmf/common.sh@470 -- # waitforlisten 2557994 00:08:15.289 17:17:34 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.289 17:17:34 -- common/autotest_common.sh@829 -- # '[' -z 2557994 ']' 00:08:15.289 17:17:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.289 17:17:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.289 17:17:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
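The three per-filesystem checks above (ext4, btrfs, xfs) all run the same mount smoke test after make_filesystem picks the force flag (-F for ext4, -f otherwise); condensed into a sketch:

# Sketch of the check repeated above for ext4, btrfs and xfs.
mkfs.ext4 -F /dev/nvme0n1p1        # or: mkfs.btrfs -f /dev/nvme0n1p1, mkfs.xfs -f /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                        # the target (pid 2556432 in this pass) must survive the I/O
lsblk -l -o NAME | grep -q -w nvme0n1     # device still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible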
00:08:15.289 17:17:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.289 17:17:34 -- common/autotest_common.sh@10 -- # set +x 00:08:15.289 [2024-11-09 17:17:34.947747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:15.289 [2024-11-09 17:17:34.947808] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.289 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.289 [2024-11-09 17:17:35.019542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.549 [2024-11-09 17:17:35.091344] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:15.549 [2024-11-09 17:17:35.091476] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.549 [2024-11-09 17:17:35.091487] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.549 [2024-11-09 17:17:35.091495] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.549 [2024-11-09 17:17:35.091543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.549 [2024-11-09 17:17:35.091652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.549 [2024-11-09 17:17:35.091738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.549 [2024-11-09 17:17:35.091740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.117 17:17:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.117 17:17:35 -- common/autotest_common.sh@862 -- # return 0 00:08:16.117 17:17:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:16.117 17:17:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.117 17:17:35 -- common/autotest_common.sh@10 -- # set +x 00:08:16.117 17:17:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.117 17:17:35 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:16.117 17:17:35 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:16.117 17:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.117 17:17:35 -- common/autotest_common.sh@10 -- # set +x 00:08:16.117 [2024-11-09 17:17:35.846361] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cab090/0x1caf580) succeed. 00:08:16.117 [2024-11-09 17:17:35.855629] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cac680/0x1cf0c20) succeed. 
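The in-capsule pass that starts here repeats the same flow; the functional difference visible in the trace is the in-capsule data size handed to the transport when it is created:

# First pass (no in-capsule data; the target still enforces the 256-byte minimum for msdbd=16):
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
# Second pass (4096-byte in-capsule data, exercised by the filesystem_in_capsule_* tests below):
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096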
00:08:16.376 17:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.377 17:17:35 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:16.377 17:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.377 17:17:35 -- common/autotest_common.sh@10 -- # set +x 00:08:16.377 Malloc1 00:08:16.377 17:17:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.377 17:17:36 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:16.377 17:17:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.377 17:17:36 -- common/autotest_common.sh@10 -- # set +x 00:08:16.377 17:17:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.377 17:17:36 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:16.377 17:17:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.377 17:17:36 -- common/autotest_common.sh@10 -- # set +x 00:08:16.377 17:17:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.377 17:17:36 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:16.377 17:17:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.377 17:17:36 -- common/autotest_common.sh@10 -- # set +x 00:08:16.377 [2024-11-09 17:17:36.124527] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:16.377 17:17:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.377 17:17:36 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:16.377 17:17:36 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:16.377 17:17:36 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:16.377 17:17:36 -- common/autotest_common.sh@1369 -- # local bs 00:08:16.377 17:17:36 -- common/autotest_common.sh@1370 -- # local nb 00:08:16.377 17:17:36 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:16.377 17:17:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.377 17:17:36 -- common/autotest_common.sh@10 -- # set +x 00:08:16.636 17:17:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.636 17:17:36 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:16.636 { 00:08:16.636 "name": "Malloc1", 00:08:16.636 "aliases": [ 00:08:16.636 "fb4e9a7a-6db7-4691-a335-8e1989877ca6" 00:08:16.636 ], 00:08:16.636 "product_name": "Malloc disk", 00:08:16.636 "block_size": 512, 00:08:16.636 "num_blocks": 1048576, 00:08:16.636 "uuid": "fb4e9a7a-6db7-4691-a335-8e1989877ca6", 00:08:16.636 "assigned_rate_limits": { 00:08:16.636 "rw_ios_per_sec": 0, 00:08:16.636 "rw_mbytes_per_sec": 0, 00:08:16.636 "r_mbytes_per_sec": 0, 00:08:16.636 "w_mbytes_per_sec": 0 00:08:16.636 }, 00:08:16.636 "claimed": true, 00:08:16.636 "claim_type": "exclusive_write", 00:08:16.636 "zoned": false, 00:08:16.636 "supported_io_types": { 00:08:16.636 "read": true, 00:08:16.636 "write": true, 00:08:16.636 "unmap": true, 00:08:16.636 "write_zeroes": true, 00:08:16.636 "flush": true, 00:08:16.636 "reset": true, 00:08:16.636 "compare": false, 00:08:16.636 "compare_and_write": false, 00:08:16.636 "abort": true, 00:08:16.636 "nvme_admin": false, 00:08:16.636 "nvme_io": false 00:08:16.636 }, 00:08:16.636 "memory_domains": [ 00:08:16.636 { 00:08:16.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.636 "dma_device_type": 2 00:08:16.636 } 00:08:16.636 ], 00:08:16.636 
"driver_specific": {} 00:08:16.636 } 00:08:16.636 ]' 00:08:16.636 17:17:36 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:16.636 17:17:36 -- common/autotest_common.sh@1372 -- # bs=512 00:08:16.636 17:17:36 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:16.636 17:17:36 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:16.636 17:17:36 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:16.636 17:17:36 -- common/autotest_common.sh@1377 -- # echo 512 00:08:16.636 17:17:36 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:16.636 17:17:36 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:17.574 17:17:37 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:17.574 17:17:37 -- common/autotest_common.sh@1187 -- # local i=0 00:08:17.574 17:17:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:17.574 17:17:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:17.574 17:17:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:19.478 17:17:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:19.478 17:17:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:19.478 17:17:39 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:19.478 17:17:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:19.478 17:17:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:19.737 17:17:39 -- common/autotest_common.sh@1197 -- # return 0 00:08:19.737 17:17:39 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:19.737 17:17:39 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:19.737 17:17:39 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:19.737 17:17:39 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:19.737 17:17:39 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:19.737 17:17:39 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:19.737 17:17:39 -- setup/common.sh@80 -- # echo 536870912 00:08:19.737 17:17:39 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:19.737 17:17:39 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:19.737 17:17:39 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:19.737 17:17:39 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:19.737 17:17:39 -- target/filesystem.sh@69 -- # partprobe 00:08:19.737 17:17:39 -- target/filesystem.sh@70 -- # sleep 1 00:08:20.674 17:17:40 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:20.674 17:17:40 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:20.674 17:17:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:20.674 17:17:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.674 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:08:20.674 ************************************ 00:08:20.674 START TEST filesystem_in_capsule_ext4 00:08:20.674 ************************************ 00:08:20.674 17:17:40 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:20.674 17:17:40 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:20.674 17:17:40 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:08:20.674 17:17:40 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:20.674 17:17:40 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:20.674 17:17:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:20.933 17:17:40 -- common/autotest_common.sh@914 -- # local i=0 00:08:20.933 17:17:40 -- common/autotest_common.sh@915 -- # local force 00:08:20.933 17:17:40 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:20.933 17:17:40 -- common/autotest_common.sh@918 -- # force=-F 00:08:20.933 17:17:40 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:20.933 mke2fs 1.47.0 (5-Feb-2023) 00:08:20.933 Discarding device blocks: 0/522240 done 00:08:20.933 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:20.933 Filesystem UUID: d5675ff8-b97c-4233-997f-3c791fae6a95 00:08:20.933 Superblock backups stored on blocks: 00:08:20.933 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:20.933 00:08:20.933 Allocating group tables: 0/64 done 00:08:20.933 Writing inode tables: 0/64 done 00:08:20.933 Creating journal (8192 blocks): done 00:08:20.933 Writing superblocks and filesystem accounting information: 0/64 done 00:08:20.933 00:08:20.933 17:17:40 -- common/autotest_common.sh@931 -- # return 0 00:08:20.933 17:17:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:20.933 17:17:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:20.933 17:17:40 -- target/filesystem.sh@25 -- # sync 00:08:20.933 17:17:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.933 17:17:40 -- target/filesystem.sh@27 -- # sync 00:08:20.933 17:17:40 -- target/filesystem.sh@29 -- # i=0 00:08:20.933 17:17:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.933 17:17:40 -- target/filesystem.sh@37 -- # kill -0 2557994 00:08:20.933 17:17:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.933 17:17:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.933 17:17:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.933 17:17:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.933 00:08:20.933 real 0m0.195s 00:08:20.933 user 0m0.031s 00:08:20.933 sys 0m0.073s 00:08:20.933 17:17:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.933 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:08:20.933 ************************************ 00:08:20.933 END TEST filesystem_in_capsule_ext4 00:08:20.933 ************************************ 00:08:20.933 17:17:40 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:20.933 17:17:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:20.933 17:17:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.933 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:08:20.933 ************************************ 00:08:20.933 START TEST filesystem_in_capsule_btrfs 00:08:20.933 ************************************ 00:08:20.933 17:17:40 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:20.933 17:17:40 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:20.933 17:17:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.933 17:17:40 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:20.933 17:17:40 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:20.933 17:17:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 
00:08:20.933 17:17:40 -- common/autotest_common.sh@914 -- # local i=0 00:08:20.933 17:17:40 -- common/autotest_common.sh@915 -- # local force 00:08:20.933 17:17:40 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:20.933 17:17:40 -- common/autotest_common.sh@920 -- # force=-f 00:08:20.933 17:17:40 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:21.192 btrfs-progs v6.8.1 00:08:21.192 See https://btrfs.readthedocs.io for more information. 00:08:21.192 00:08:21.192 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:21.192 NOTE: several default settings have changed in version 5.15, please make sure 00:08:21.192 this does not affect your deployments: 00:08:21.192 - DUP for metadata (-m dup) 00:08:21.193 - enabled no-holes (-O no-holes) 00:08:21.193 - enabled free-space-tree (-R free-space-tree) 00:08:21.193 00:08:21.193 Label: (null) 00:08:21.193 UUID: 0ab1bce4-808f-4ebb-914c-e9ea6d01228d 00:08:21.193 Node size: 16384 00:08:21.193 Sector size: 4096 (CPU page size: 4096) 00:08:21.193 Filesystem size: 510.00MiB 00:08:21.193 Block group profiles: 00:08:21.193 Data: single 8.00MiB 00:08:21.193 Metadata: DUP 32.00MiB 00:08:21.193 System: DUP 8.00MiB 00:08:21.193 SSD detected: yes 00:08:21.193 Zoned device: no 00:08:21.193 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:21.193 Checksum: crc32c 00:08:21.193 Number of devices: 1 00:08:21.193 Devices: 00:08:21.193 ID SIZE PATH 00:08:21.193 1 510.00MiB /dev/nvme0n1p1 00:08:21.193 00:08:21.193 17:17:40 -- common/autotest_common.sh@931 -- # return 0 00:08:21.193 17:17:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.193 17:17:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.193 17:17:40 -- target/filesystem.sh@25 -- # sync 00:08:21.193 17:17:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.193 17:17:40 -- target/filesystem.sh@27 -- # sync 00:08:21.193 17:17:40 -- target/filesystem.sh@29 -- # i=0 00:08:21.193 17:17:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.193 17:17:40 -- target/filesystem.sh@37 -- # kill -0 2557994 00:08:21.193 17:17:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.193 17:17:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.193 17:17:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.193 17:17:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.193 00:08:21.193 real 0m0.232s 00:08:21.193 user 0m0.027s 00:08:21.193 sys 0m0.114s 00:08:21.193 17:17:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.193 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:08:21.193 ************************************ 00:08:21.193 END TEST filesystem_in_capsule_btrfs 00:08:21.193 ************************************ 00:08:21.452 17:17:40 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:21.452 17:17:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:21.452 17:17:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.452 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:08:21.452 ************************************ 00:08:21.452 START TEST filesystem_in_capsule_xfs 00:08:21.452 ************************************ 00:08:21.452 17:17:40 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:21.452 17:17:40 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:21.452 17:17:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:21.452 
17:17:40 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:21.452 17:17:40 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:21.452 17:17:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:21.452 17:17:40 -- common/autotest_common.sh@914 -- # local i=0 00:08:21.452 17:17:40 -- common/autotest_common.sh@915 -- # local force 00:08:21.452 17:17:40 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:21.452 17:17:40 -- common/autotest_common.sh@920 -- # force=-f 00:08:21.452 17:17:40 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:21.452 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:21.452 = sectsz=512 attr=2, projid32bit=1 00:08:21.452 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:21.452 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:21.452 data = bsize=4096 blocks=130560, imaxpct=25 00:08:21.452 = sunit=0 swidth=0 blks 00:08:21.452 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:21.452 log =internal log bsize=4096 blocks=16384, version=2 00:08:21.452 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:21.452 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:21.452 Discarding blocks...Done. 00:08:21.452 17:17:41 -- common/autotest_common.sh@931 -- # return 0 00:08:21.452 17:17:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.452 17:17:41 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.452 17:17:41 -- target/filesystem.sh@25 -- # sync 00:08:21.452 17:17:41 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.452 17:17:41 -- target/filesystem.sh@27 -- # sync 00:08:21.452 17:17:41 -- target/filesystem.sh@29 -- # i=0 00:08:21.452 17:17:41 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.452 17:17:41 -- target/filesystem.sh@37 -- # kill -0 2557994 00:08:21.452 17:17:41 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.452 17:17:41 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.452 17:17:41 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.452 17:17:41 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.452 00:08:21.452 real 0m0.208s 00:08:21.452 user 0m0.025s 00:08:21.452 sys 0m0.081s 00:08:21.452 17:17:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.452 17:17:41 -- common/autotest_common.sh@10 -- # set +x 00:08:21.452 ************************************ 00:08:21.452 END TEST filesystem_in_capsule_xfs 00:08:21.452 ************************************ 00:08:21.452 17:17:41 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:21.711 17:17:41 -- target/filesystem.sh@93 -- # sync 00:08:21.711 17:17:41 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:22.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.758 17:17:42 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:22.758 17:17:42 -- common/autotest_common.sh@1208 -- # local i=0 00:08:22.758 17:17:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:22.758 17:17:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.758 17:17:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:22.758 17:17:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.758 17:17:42 -- common/autotest_common.sh@1220 -- # return 0 00:08:22.758 17:17:42 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:08:22.758 17:17:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.758 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:08:22.758 17:17:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.758 17:17:42 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:22.758 17:17:42 -- target/filesystem.sh@101 -- # killprocess 2557994 00:08:22.758 17:17:42 -- common/autotest_common.sh@936 -- # '[' -z 2557994 ']' 00:08:22.758 17:17:42 -- common/autotest_common.sh@940 -- # kill -0 2557994 00:08:22.758 17:17:42 -- common/autotest_common.sh@941 -- # uname 00:08:22.758 17:17:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:22.758 17:17:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2557994 00:08:22.758 17:17:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:22.758 17:17:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:22.758 17:17:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2557994' 00:08:22.758 killing process with pid 2557994 00:08:22.758 17:17:42 -- common/autotest_common.sh@955 -- # kill 2557994 00:08:22.758 17:17:42 -- common/autotest_common.sh@960 -- # wait 2557994 00:08:23.018 17:17:42 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:23.018 00:08:23.018 real 0m7.863s 00:08:23.018 user 0m30.545s 00:08:23.018 sys 0m1.188s 00:08:23.018 17:17:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.018 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:08:23.018 ************************************ 00:08:23.018 END TEST nvmf_filesystem_in_capsule 00:08:23.018 ************************************ 00:08:23.277 17:17:42 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:23.277 17:17:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:23.277 17:17:42 -- nvmf/common.sh@116 -- # sync 00:08:23.277 17:17:42 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:23.277 17:17:42 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:23.277 17:17:42 -- nvmf/common.sh@119 -- # set +e 00:08:23.277 17:17:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:23.277 17:17:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:23.277 rmmod nvme_rdma 00:08:23.277 rmmod nvme_fabrics 00:08:23.277 17:17:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:23.277 17:17:42 -- nvmf/common.sh@123 -- # set -e 00:08:23.277 17:17:42 -- nvmf/common.sh@124 -- # return 0 00:08:23.277 17:17:42 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:23.277 17:17:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:23.277 17:17:42 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:23.277 00:08:23.277 real 0m22.843s 00:08:23.277 user 1m3.290s 00:08:23.277 sys 0m7.526s 00:08:23.277 17:17:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.277 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:08:23.277 ************************************ 00:08:23.277 END TEST nvmf_filesystem 00:08:23.277 ************************************ 00:08:23.277 17:17:42 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:23.277 17:17:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:23.277 17:17:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.277 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:08:23.277 ************************************ 00:08:23.277 START TEST nvmf_discovery 00:08:23.277 
************************************ 00:08:23.277 17:17:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:23.277 * Looking for test storage... 00:08:23.277 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:23.277 17:17:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:23.277 17:17:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:23.277 17:17:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:23.537 17:17:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:23.537 17:17:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:23.537 17:17:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:23.537 17:17:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:23.537 17:17:43 -- scripts/common.sh@335 -- # IFS=.-: 00:08:23.537 17:17:43 -- scripts/common.sh@335 -- # read -ra ver1 00:08:23.537 17:17:43 -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.537 17:17:43 -- scripts/common.sh@336 -- # read -ra ver2 00:08:23.537 17:17:43 -- scripts/common.sh@337 -- # local 'op=<' 00:08:23.537 17:17:43 -- scripts/common.sh@339 -- # ver1_l=2 00:08:23.537 17:17:43 -- scripts/common.sh@340 -- # ver2_l=1 00:08:23.537 17:17:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:23.537 17:17:43 -- scripts/common.sh@343 -- # case "$op" in 00:08:23.537 17:17:43 -- scripts/common.sh@344 -- # : 1 00:08:23.537 17:17:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:23.537 17:17:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:23.537 17:17:43 -- scripts/common.sh@364 -- # decimal 1 00:08:23.537 17:17:43 -- scripts/common.sh@352 -- # local d=1 00:08:23.537 17:17:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.537 17:17:43 -- scripts/common.sh@354 -- # echo 1 00:08:23.537 17:17:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:23.537 17:17:43 -- scripts/common.sh@365 -- # decimal 2 00:08:23.537 17:17:43 -- scripts/common.sh@352 -- # local d=2 00:08:23.537 17:17:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.537 17:17:43 -- scripts/common.sh@354 -- # echo 2 00:08:23.537 17:17:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:23.537 17:17:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:23.537 17:17:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:23.537 17:17:43 -- scripts/common.sh@367 -- # return 0 00:08:23.537 17:17:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.537 17:17:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:23.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.537 --rc genhtml_branch_coverage=1 00:08:23.537 --rc genhtml_function_coverage=1 00:08:23.537 --rc genhtml_legend=1 00:08:23.537 --rc geninfo_all_blocks=1 00:08:23.537 --rc geninfo_unexecuted_blocks=1 00:08:23.537 00:08:23.537 ' 00:08:23.537 17:17:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:23.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.537 --rc genhtml_branch_coverage=1 00:08:23.537 --rc genhtml_function_coverage=1 00:08:23.537 --rc genhtml_legend=1 00:08:23.537 --rc geninfo_all_blocks=1 00:08:23.537 --rc geninfo_unexecuted_blocks=1 00:08:23.537 00:08:23.537 ' 00:08:23.537 17:17:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:23.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:23.537 --rc genhtml_branch_coverage=1 00:08:23.537 --rc genhtml_function_coverage=1 00:08:23.537 --rc genhtml_legend=1 00:08:23.537 --rc geninfo_all_blocks=1 00:08:23.537 --rc geninfo_unexecuted_blocks=1 00:08:23.537 00:08:23.537 ' 00:08:23.537 17:17:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:23.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.537 --rc genhtml_branch_coverage=1 00:08:23.537 --rc genhtml_function_coverage=1 00:08:23.537 --rc genhtml_legend=1 00:08:23.537 --rc geninfo_all_blocks=1 00:08:23.537 --rc geninfo_unexecuted_blocks=1 00:08:23.537 00:08:23.537 ' 00:08:23.537 17:17:43 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.537 17:17:43 -- nvmf/common.sh@7 -- # uname -s 00:08:23.537 17:17:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.537 17:17:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.537 17:17:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.537 17:17:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.538 17:17:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.538 17:17:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.538 17:17:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.538 17:17:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.538 17:17:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.538 17:17:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.538 17:17:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:23.538 17:17:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:23.538 17:17:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.538 17:17:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.538 17:17:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.538 17:17:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:23.538 17:17:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.538 17:17:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.538 17:17:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.538 17:17:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.538 17:17:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.538 17:17:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.538 17:17:43 -- paths/export.sh@5 -- # export PATH 00:08:23.538 17:17:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.538 17:17:43 -- nvmf/common.sh@46 -- # : 0 00:08:23.538 17:17:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:23.538 17:17:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:23.538 17:17:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:23.538 17:17:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.538 17:17:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.538 17:17:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:23.538 17:17:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:23.538 17:17:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:23.538 17:17:43 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:23.538 17:17:43 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:23.538 17:17:43 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:23.538 17:17:43 -- target/discovery.sh@15 -- # hash nvme 00:08:23.538 17:17:43 -- target/discovery.sh@20 -- # nvmftestinit 00:08:23.538 17:17:43 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:23.538 17:17:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.538 17:17:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:23.538 17:17:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:23.538 17:17:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:23.538 17:17:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.538 17:17:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.538 17:17:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.538 17:17:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:23.538 17:17:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:23.538 17:17:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:23.538 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:08:30.107 17:17:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:30.107 17:17:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:30.107 17:17:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:30.107 17:17:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:30.107 17:17:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:30.107 17:17:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:30.107 17:17:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:30.107 17:17:49 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:30.107 17:17:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:30.107 17:17:49 -- nvmf/common.sh@295 -- # e810=() 00:08:30.107 17:17:49 -- nvmf/common.sh@295 -- # local -ga e810 00:08:30.107 17:17:49 -- nvmf/common.sh@296 -- # x722=() 00:08:30.107 17:17:49 -- nvmf/common.sh@296 -- # local -ga x722 00:08:30.107 17:17:49 -- nvmf/common.sh@297 -- # mlx=() 00:08:30.107 17:17:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:30.107 17:17:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.107 17:17:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.107 17:17:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.107 17:17:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.107 17:17:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.107 17:17:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.107 17:17:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.107 17:17:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.107 17:17:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.107 17:17:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.107 17:17:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.107 17:17:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:30.107 17:17:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:30.107 17:17:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:30.107 17:17:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:30.107 17:17:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:30.107 17:17:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:30.107 17:17:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:30.107 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:30.107 17:17:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:30.107 17:17:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:30.107 17:17:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:30.107 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:30.107 17:17:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:30.107 17:17:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:30.107 17:17:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:30.107 
17:17:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.107 17:17:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:30.107 17:17:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.107 17:17:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:30.107 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:30.107 17:17:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.107 17:17:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:30.107 17:17:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.107 17:17:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:30.107 17:17:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.107 17:17:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:30.107 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:30.107 17:17:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.107 17:17:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:30.107 17:17:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:30.107 17:17:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:30.107 17:17:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:30.107 17:17:49 -- nvmf/common.sh@57 -- # uname 00:08:30.107 17:17:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:30.107 17:17:49 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:30.107 17:17:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:30.107 17:17:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:30.107 17:17:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:30.107 17:17:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:30.107 17:17:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:30.107 17:17:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:30.107 17:17:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:30.107 17:17:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:30.107 17:17:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:30.107 17:17:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:30.107 17:17:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:30.107 17:17:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:30.107 17:17:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:30.107 17:17:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:30.107 17:17:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:30.107 17:17:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:30.107 17:17:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:30.107 17:17:49 -- nvmf/common.sh@104 -- # continue 2 00:08:30.107 17:17:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:30.107 17:17:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:30.107 17:17:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:30.107 17:17:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:30.107 17:17:49 -- nvmf/common.sh@104 -- # continue 2 00:08:30.107 17:17:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:30.107 17:17:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:30.107 17:17:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:30.107 17:17:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:30.107 17:17:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:30.107 17:17:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:30.107 17:17:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:30.107 17:17:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:30.107 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:30.107 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:30.107 altname enp217s0f0np0 00:08:30.107 altname ens818f0np0 00:08:30.107 inet 192.168.100.8/24 scope global mlx_0_0 00:08:30.107 valid_lft forever preferred_lft forever 00:08:30.107 17:17:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:30.107 17:17:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:30.107 17:17:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:30.107 17:17:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:30.107 17:17:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:30.107 17:17:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:30.107 17:17:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:30.107 17:17:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:30.107 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:30.107 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:30.107 altname enp217s0f1np1 00:08:30.107 altname ens818f1np1 00:08:30.107 inet 192.168.100.9/24 scope global mlx_0_1 00:08:30.107 valid_lft forever preferred_lft forever 00:08:30.107 17:17:49 -- nvmf/common.sh@410 -- # return 0 00:08:30.107 17:17:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:30.107 17:17:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:30.107 17:17:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:30.107 17:17:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:30.107 17:17:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:30.107 17:17:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:30.107 17:17:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:30.107 17:17:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:30.107 17:17:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:30.107 17:17:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:30.108 17:17:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:30.108 17:17:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:30.108 17:17:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:30.108 17:17:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:30.108 17:17:49 -- nvmf/common.sh@104 -- # continue 2 00:08:30.108 17:17:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:30.108 17:17:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:30.108 17:17:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:30.108 17:17:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:30.108 17:17:49 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:30.108 17:17:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:30.108 17:17:49 -- nvmf/common.sh@104 -- # continue 2 00:08:30.108 17:17:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:30.108 17:17:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:30.108 17:17:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:30.108 17:17:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:30.108 17:17:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:30.108 17:17:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:30.108 17:17:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:30.108 17:17:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:30.108 17:17:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:30.108 17:17:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:30.108 17:17:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:30.108 17:17:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:30.108 17:17:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:30.108 192.168.100.9' 00:08:30.108 17:17:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:30.108 192.168.100.9' 00:08:30.108 17:17:49 -- nvmf/common.sh@445 -- # head -n 1 00:08:30.108 17:17:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:30.108 17:17:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:30.108 192.168.100.9' 00:08:30.108 17:17:49 -- nvmf/common.sh@446 -- # tail -n +2 00:08:30.108 17:17:49 -- nvmf/common.sh@446 -- # head -n 1 00:08:30.108 17:17:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:30.108 17:17:49 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:30.108 17:17:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:30.108 17:17:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:30.108 17:17:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:30.108 17:17:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:30.108 17:17:49 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:30.108 17:17:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:30.108 17:17:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:30.108 17:17:49 -- common/autotest_common.sh@10 -- # set +x 00:08:30.108 17:17:49 -- nvmf/common.sh@469 -- # nvmfpid=2562749 00:08:30.108 17:17:49 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.108 17:17:49 -- nvmf/common.sh@470 -- # waitforlisten 2562749 00:08:30.108 17:17:49 -- common/autotest_common.sh@829 -- # '[' -z 2562749 ']' 00:08:30.108 17:17:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.108 17:17:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.108 17:17:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.108 17:17:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.108 17:17:49 -- common/autotest_common.sh@10 -- # set +x 00:08:30.108 [2024-11-09 17:17:49.451133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
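Note: the trace above shows nvmftestinit deriving the RDMA target addresses (192.168.100.8 and 192.168.100.9) from the mlx_0_0/mlx_0_1 interfaces and nvmfappstart launching the target binary. A minimal manual sketch of those two steps is given below; the binary path, flags and address pipeline are copied from the log, while the socket-polling loop is only an illustrative stand-in for waitforlisten, not the test's actual implementation.

  # Read the IPv4 address assigned to an RDMA-capable interface (same pipeline as the trace)
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1      # -> 192.168.100.8

  # Launch the NVMe-oF target with the instance id, trace flags and core mask used above
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  # Illustrative wait for the RPC UNIX socket to appear before issuing any RPCs
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done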
00:08:30.108 [2024-11-09 17:17:49.451181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.108 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.108 [2024-11-09 17:17:49.521929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.108 [2024-11-09 17:17:49.593360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:30.108 [2024-11-09 17:17:49.593479] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.108 [2024-11-09 17:17:49.593489] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.108 [2024-11-09 17:17:49.593497] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.108 [2024-11-09 17:17:49.593546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.108 [2024-11-09 17:17:49.593567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.108 [2024-11-09 17:17:49.593654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.108 [2024-11-09 17:17:49.593656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.676 17:17:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.676 17:17:50 -- common/autotest_common.sh@862 -- # return 0 00:08:30.676 17:17:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:30.676 17:17:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:30.676 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.676 17:17:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.676 17:17:50 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:30.676 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.676 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.676 [2024-11-09 17:17:50.346633] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd3b090/0xd3f580) succeed. 00:08:30.676 [2024-11-09 17:17:50.355790] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd3c680/0xd80c20) succeed. 
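Note: once the target is up, the RDMA transport is created over the RPC socket before any subsystems exist; the two create_ib_device notices confirm both mlx5 ports were picked up. rpc_cmd in the trace is effectively the autotest wrapper around SPDK's scripts/rpc.py, so a rough stand-alone equivalent of that call (script path as used in this workspace, options copied verbatim from the trace) would be:

  # Create the RDMA transport with the same options the discovery test passes
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport \
      -t rdma --num-shared-buffers 1024 -u 8192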
00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@26 -- # seq 1 4 00:08:30.935 17:17:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:30.935 17:17:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 Null1 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 [2024-11-09 17:17:50.523289] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:30.935 17:17:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 Null2 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:30.935 17:17:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 Null3 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:30.935 17:17:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 Null4 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:30.935 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.935 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.935 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.935 17:17:50 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:08:31.194 00:08:31.194 Discovery Log Number of Records 6, Generation counter 6 00:08:31.194 =====Discovery Log Entry 0====== 00:08:31.194 trtype: 
rdma 00:08:31.194 adrfam: ipv4 00:08:31.194 subtype: current discovery subsystem 00:08:31.194 treq: not required 00:08:31.194 portid: 0 00:08:31.195 trsvcid: 4420 00:08:31.195 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:31.195 traddr: 192.168.100.8 00:08:31.195 eflags: explicit discovery connections, duplicate discovery information 00:08:31.195 rdma_prtype: not specified 00:08:31.195 rdma_qptype: connected 00:08:31.195 rdma_cms: rdma-cm 00:08:31.195 rdma_pkey: 0x0000 00:08:31.195 =====Discovery Log Entry 1====== 00:08:31.195 trtype: rdma 00:08:31.195 adrfam: ipv4 00:08:31.195 subtype: nvme subsystem 00:08:31.195 treq: not required 00:08:31.195 portid: 0 00:08:31.195 trsvcid: 4420 00:08:31.195 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:31.195 traddr: 192.168.100.8 00:08:31.195 eflags: none 00:08:31.195 rdma_prtype: not specified 00:08:31.195 rdma_qptype: connected 00:08:31.195 rdma_cms: rdma-cm 00:08:31.195 rdma_pkey: 0x0000 00:08:31.195 =====Discovery Log Entry 2====== 00:08:31.195 trtype: rdma 00:08:31.195 adrfam: ipv4 00:08:31.195 subtype: nvme subsystem 00:08:31.195 treq: not required 00:08:31.195 portid: 0 00:08:31.195 trsvcid: 4420 00:08:31.195 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:31.195 traddr: 192.168.100.8 00:08:31.195 eflags: none 00:08:31.195 rdma_prtype: not specified 00:08:31.195 rdma_qptype: connected 00:08:31.195 rdma_cms: rdma-cm 00:08:31.195 rdma_pkey: 0x0000 00:08:31.195 =====Discovery Log Entry 3====== 00:08:31.195 trtype: rdma 00:08:31.195 adrfam: ipv4 00:08:31.195 subtype: nvme subsystem 00:08:31.195 treq: not required 00:08:31.195 portid: 0 00:08:31.195 trsvcid: 4420 00:08:31.195 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:31.195 traddr: 192.168.100.8 00:08:31.195 eflags: none 00:08:31.195 rdma_prtype: not specified 00:08:31.195 rdma_qptype: connected 00:08:31.195 rdma_cms: rdma-cm 00:08:31.195 rdma_pkey: 0x0000 00:08:31.195 =====Discovery Log Entry 4====== 00:08:31.195 trtype: rdma 00:08:31.195 adrfam: ipv4 00:08:31.195 subtype: nvme subsystem 00:08:31.195 treq: not required 00:08:31.195 portid: 0 00:08:31.195 trsvcid: 4420 00:08:31.195 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:31.195 traddr: 192.168.100.8 00:08:31.195 eflags: none 00:08:31.195 rdma_prtype: not specified 00:08:31.195 rdma_qptype: connected 00:08:31.195 rdma_cms: rdma-cm 00:08:31.195 rdma_pkey: 0x0000 00:08:31.195 =====Discovery Log Entry 5====== 00:08:31.195 trtype: rdma 00:08:31.195 adrfam: ipv4 00:08:31.195 subtype: discovery subsystem referral 00:08:31.195 treq: not required 00:08:31.195 portid: 0 00:08:31.195 trsvcid: 4430 00:08:31.195 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:31.195 traddr: 192.168.100.8 00:08:31.195 eflags: none 00:08:31.195 rdma_prtype: unrecognized 00:08:31.195 rdma_qptype: unrecognized 00:08:31.195 rdma_cms: unrecognized 00:08:31.195 rdma_pkey: 0x0000 00:08:31.195 17:17:50 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:31.195 Perform nvmf subsystem discovery via RPC 00:08:31.195 17:17:50 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:31.195 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.195 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:31.195 [2024-11-09 17:17:50.743720] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:31.195 [ 00:08:31.195 { 00:08:31.195 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:31.195 "subtype": "Discovery", 
00:08:31.195 "listen_addresses": [ 00:08:31.195 { 00:08:31.195 "transport": "RDMA", 00:08:31.195 "trtype": "RDMA", 00:08:31.195 "adrfam": "IPv4", 00:08:31.195 "traddr": "192.168.100.8", 00:08:31.195 "trsvcid": "4420" 00:08:31.195 } 00:08:31.195 ], 00:08:31.195 "allow_any_host": true, 00:08:31.195 "hosts": [] 00:08:31.195 }, 00:08:31.195 { 00:08:31.195 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.195 "subtype": "NVMe", 00:08:31.195 "listen_addresses": [ 00:08:31.195 { 00:08:31.195 "transport": "RDMA", 00:08:31.195 "trtype": "RDMA", 00:08:31.195 "adrfam": "IPv4", 00:08:31.195 "traddr": "192.168.100.8", 00:08:31.195 "trsvcid": "4420" 00:08:31.195 } 00:08:31.195 ], 00:08:31.195 "allow_any_host": true, 00:08:31.195 "hosts": [], 00:08:31.195 "serial_number": "SPDK00000000000001", 00:08:31.195 "model_number": "SPDK bdev Controller", 00:08:31.195 "max_namespaces": 32, 00:08:31.195 "min_cntlid": 1, 00:08:31.195 "max_cntlid": 65519, 00:08:31.195 "namespaces": [ 00:08:31.195 { 00:08:31.195 "nsid": 1, 00:08:31.195 "bdev_name": "Null1", 00:08:31.195 "name": "Null1", 00:08:31.195 "nguid": "9BB93BBFF5DF457FBE23BBBE883B61AD", 00:08:31.195 "uuid": "9bb93bbf-f5df-457f-be23-bbbe883b61ad" 00:08:31.195 } 00:08:31.195 ] 00:08:31.195 }, 00:08:31.195 { 00:08:31.195 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:31.195 "subtype": "NVMe", 00:08:31.195 "listen_addresses": [ 00:08:31.195 { 00:08:31.195 "transport": "RDMA", 00:08:31.195 "trtype": "RDMA", 00:08:31.195 "adrfam": "IPv4", 00:08:31.195 "traddr": "192.168.100.8", 00:08:31.195 "trsvcid": "4420" 00:08:31.195 } 00:08:31.195 ], 00:08:31.195 "allow_any_host": true, 00:08:31.195 "hosts": [], 00:08:31.195 "serial_number": "SPDK00000000000002", 00:08:31.195 "model_number": "SPDK bdev Controller", 00:08:31.195 "max_namespaces": 32, 00:08:31.195 "min_cntlid": 1, 00:08:31.195 "max_cntlid": 65519, 00:08:31.195 "namespaces": [ 00:08:31.195 { 00:08:31.195 "nsid": 1, 00:08:31.195 "bdev_name": "Null2", 00:08:31.195 "name": "Null2", 00:08:31.195 "nguid": "00EAB3435BFB45CA83B95301F7D954A9", 00:08:31.195 "uuid": "00eab343-5bfb-45ca-83b9-5301f7d954a9" 00:08:31.195 } 00:08:31.195 ] 00:08:31.195 }, 00:08:31.195 { 00:08:31.195 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:31.195 "subtype": "NVMe", 00:08:31.195 "listen_addresses": [ 00:08:31.195 { 00:08:31.195 "transport": "RDMA", 00:08:31.195 "trtype": "RDMA", 00:08:31.195 "adrfam": "IPv4", 00:08:31.195 "traddr": "192.168.100.8", 00:08:31.195 "trsvcid": "4420" 00:08:31.195 } 00:08:31.195 ], 00:08:31.195 "allow_any_host": true, 00:08:31.195 "hosts": [], 00:08:31.195 "serial_number": "SPDK00000000000003", 00:08:31.195 "model_number": "SPDK bdev Controller", 00:08:31.195 "max_namespaces": 32, 00:08:31.195 "min_cntlid": 1, 00:08:31.195 "max_cntlid": 65519, 00:08:31.195 "namespaces": [ 00:08:31.195 { 00:08:31.195 "nsid": 1, 00:08:31.195 "bdev_name": "Null3", 00:08:31.195 "name": "Null3", 00:08:31.195 "nguid": "02B52FFDAC5E44D5BD4C605F1920F9E0", 00:08:31.195 "uuid": "02b52ffd-ac5e-44d5-bd4c-605f1920f9e0" 00:08:31.195 } 00:08:31.195 ] 00:08:31.195 }, 00:08:31.195 { 00:08:31.195 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:31.195 "subtype": "NVMe", 00:08:31.195 "listen_addresses": [ 00:08:31.195 { 00:08:31.195 "transport": "RDMA", 00:08:31.195 "trtype": "RDMA", 00:08:31.195 "adrfam": "IPv4", 00:08:31.195 "traddr": "192.168.100.8", 00:08:31.195 "trsvcid": "4420" 00:08:31.195 } 00:08:31.195 ], 00:08:31.195 "allow_any_host": true, 00:08:31.195 "hosts": [], 00:08:31.195 "serial_number": "SPDK00000000000004", 00:08:31.195 "model_number": "SPDK bdev 
Controller", 00:08:31.195 "max_namespaces": 32, 00:08:31.195 "min_cntlid": 1, 00:08:31.195 "max_cntlid": 65519, 00:08:31.195 "namespaces": [ 00:08:31.195 { 00:08:31.195 "nsid": 1, 00:08:31.195 "bdev_name": "Null4", 00:08:31.195 "name": "Null4", 00:08:31.195 "nguid": "13CD8F6E790A4AFEB1F7023CC0A2C19E", 00:08:31.195 "uuid": "13cd8f6e-790a-4afe-b1f7-023cc0a2c19e" 00:08:31.195 } 00:08:31.195 ] 00:08:31.195 } 00:08:31.195 ] 00:08:31.195 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.195 17:17:50 -- target/discovery.sh@42 -- # seq 1 4 00:08:31.195 17:17:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.195 17:17:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:31.195 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.195 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:31.195 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.195 17:17:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:31.195 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.195 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:31.195 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.195 17:17:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.195 17:17:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:31.195 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.195 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:31.195 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.195 17:17:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:31.195 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.195 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:31.195 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.195 17:17:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.195 17:17:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:31.195 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.195 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:31.195 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.195 17:17:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:31.195 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.195 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:31.196 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.196 17:17:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.196 17:17:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:31.196 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.196 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:31.196 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.196 17:17:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:31.196 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.196 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:31.196 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.196 17:17:50 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:31.196 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.196 
17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:31.196 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.196 17:17:50 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:31.196 17:17:50 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:31.196 17:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.196 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:31.196 17:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.196 17:17:50 -- target/discovery.sh@49 -- # check_bdevs= 00:08:31.196 17:17:50 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:31.196 17:17:50 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:31.196 17:17:50 -- target/discovery.sh@57 -- # nvmftestfini 00:08:31.196 17:17:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:31.196 17:17:50 -- nvmf/common.sh@116 -- # sync 00:08:31.196 17:17:50 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:31.196 17:17:50 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:31.196 17:17:50 -- nvmf/common.sh@119 -- # set +e 00:08:31.196 17:17:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:31.196 17:17:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:31.196 rmmod nvme_rdma 00:08:31.196 rmmod nvme_fabrics 00:08:31.196 17:17:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:31.196 17:17:50 -- nvmf/common.sh@123 -- # set -e 00:08:31.196 17:17:50 -- nvmf/common.sh@124 -- # return 0 00:08:31.196 17:17:50 -- nvmf/common.sh@477 -- # '[' -n 2562749 ']' 00:08:31.196 17:17:50 -- nvmf/common.sh@478 -- # killprocess 2562749 00:08:31.196 17:17:50 -- common/autotest_common.sh@936 -- # '[' -z 2562749 ']' 00:08:31.196 17:17:50 -- common/autotest_common.sh@940 -- # kill -0 2562749 00:08:31.196 17:17:50 -- common/autotest_common.sh@941 -- # uname 00:08:31.196 17:17:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:31.196 17:17:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2562749 00:08:31.455 17:17:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:31.455 17:17:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:31.455 17:17:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2562749' 00:08:31.455 killing process with pid 2562749 00:08:31.455 17:17:51 -- common/autotest_common.sh@955 -- # kill 2562749 00:08:31.455 [2024-11-09 17:17:51.016960] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:31.455 17:17:51 -- common/autotest_common.sh@960 -- # wait 2562749 00:08:31.715 17:17:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:31.715 17:17:51 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:31.715 00:08:31.715 real 0m8.392s 00:08:31.715 user 0m8.518s 00:08:31.715 sys 0m5.364s 00:08:31.715 17:17:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.715 17:17:51 -- common/autotest_common.sh@10 -- # set +x 00:08:31.715 ************************************ 00:08:31.715 END TEST nvmf_discovery 00:08:31.715 ************************************ 00:08:31.715 17:17:51 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:31.715 17:17:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:31.715 17:17:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.715 17:17:51 -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.715 ************************************ 00:08:31.715 START TEST nvmf_referrals 00:08:31.715 ************************************ 00:08:31.715 17:17:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:31.715 * Looking for test storage... 00:08:31.715 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:31.715 17:17:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:31.715 17:17:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:31.715 17:17:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:31.975 17:17:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:31.975 17:17:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:31.975 17:17:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:31.975 17:17:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:31.975 17:17:51 -- scripts/common.sh@335 -- # IFS=.-: 00:08:31.975 17:17:51 -- scripts/common.sh@335 -- # read -ra ver1 00:08:31.975 17:17:51 -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.975 17:17:51 -- scripts/common.sh@336 -- # read -ra ver2 00:08:31.975 17:17:51 -- scripts/common.sh@337 -- # local 'op=<' 00:08:31.975 17:17:51 -- scripts/common.sh@339 -- # ver1_l=2 00:08:31.975 17:17:51 -- scripts/common.sh@340 -- # ver2_l=1 00:08:31.975 17:17:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:31.975 17:17:51 -- scripts/common.sh@343 -- # case "$op" in 00:08:31.975 17:17:51 -- scripts/common.sh@344 -- # : 1 00:08:31.975 17:17:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:31.975 17:17:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:31.975 17:17:51 -- scripts/common.sh@364 -- # decimal 1 00:08:31.975 17:17:51 -- scripts/common.sh@352 -- # local d=1 00:08:31.975 17:17:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.975 17:17:51 -- scripts/common.sh@354 -- # echo 1 00:08:31.975 17:17:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:31.975 17:17:51 -- scripts/common.sh@365 -- # decimal 2 00:08:31.975 17:17:51 -- scripts/common.sh@352 -- # local d=2 00:08:31.975 17:17:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.975 17:17:51 -- scripts/common.sh@354 -- # echo 2 00:08:31.975 17:17:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:31.975 17:17:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:31.975 17:17:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:31.975 17:17:51 -- scripts/common.sh@367 -- # return 0 00:08:31.975 17:17:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.975 17:17:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:31.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.975 --rc genhtml_branch_coverage=1 00:08:31.976 --rc genhtml_function_coverage=1 00:08:31.976 --rc genhtml_legend=1 00:08:31.976 --rc geninfo_all_blocks=1 00:08:31.976 --rc geninfo_unexecuted_blocks=1 00:08:31.976 00:08:31.976 ' 00:08:31.976 17:17:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:31.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.976 --rc genhtml_branch_coverage=1 00:08:31.976 --rc genhtml_function_coverage=1 00:08:31.976 --rc genhtml_legend=1 00:08:31.976 --rc geninfo_all_blocks=1 00:08:31.976 --rc geninfo_unexecuted_blocks=1 00:08:31.976 00:08:31.976 ' 00:08:31.976 
17:17:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:31.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.976 --rc genhtml_branch_coverage=1 00:08:31.976 --rc genhtml_function_coverage=1 00:08:31.976 --rc genhtml_legend=1 00:08:31.976 --rc geninfo_all_blocks=1 00:08:31.976 --rc geninfo_unexecuted_blocks=1 00:08:31.976 00:08:31.976 ' 00:08:31.976 17:17:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:31.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.976 --rc genhtml_branch_coverage=1 00:08:31.976 --rc genhtml_function_coverage=1 00:08:31.976 --rc genhtml_legend=1 00:08:31.976 --rc geninfo_all_blocks=1 00:08:31.976 --rc geninfo_unexecuted_blocks=1 00:08:31.976 00:08:31.976 ' 00:08:31.976 17:17:51 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.976 17:17:51 -- nvmf/common.sh@7 -- # uname -s 00:08:31.976 17:17:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.976 17:17:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.976 17:17:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.976 17:17:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.976 17:17:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.976 17:17:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.976 17:17:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.976 17:17:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.976 17:17:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.976 17:17:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.976 17:17:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:31.976 17:17:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:31.976 17:17:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.976 17:17:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.976 17:17:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.976 17:17:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:31.976 17:17:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.976 17:17:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.976 17:17:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.976 17:17:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.976 17:17:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.976 17:17:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.976 17:17:51 -- paths/export.sh@5 -- # export PATH 00:08:31.976 17:17:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.976 17:17:51 -- nvmf/common.sh@46 -- # : 0 00:08:31.976 17:17:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:31.976 17:17:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:31.976 17:17:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:31.976 17:17:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.976 17:17:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.976 17:17:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:31.976 17:17:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:31.976 17:17:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:31.976 17:17:51 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:31.976 17:17:51 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:31.976 17:17:51 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:31.976 17:17:51 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:31.976 17:17:51 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:31.976 17:17:51 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:31.976 17:17:51 -- target/referrals.sh@37 -- # nvmftestinit 00:08:31.976 17:17:51 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:31.976 17:17:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.976 17:17:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:31.976 17:17:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:31.976 17:17:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:31.976 17:17:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.976 17:17:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.976 17:17:51 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:08:31.976 17:17:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:31.976 17:17:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:31.976 17:17:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:31.976 17:17:51 -- common/autotest_common.sh@10 -- # set +x 00:08:38.544 17:17:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:38.544 17:17:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:38.544 17:17:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:38.544 17:17:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:38.544 17:17:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:38.544 17:17:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:38.544 17:17:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:38.544 17:17:57 -- nvmf/common.sh@294 -- # net_devs=() 00:08:38.544 17:17:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:38.544 17:17:57 -- nvmf/common.sh@295 -- # e810=() 00:08:38.544 17:17:57 -- nvmf/common.sh@295 -- # local -ga e810 00:08:38.544 17:17:57 -- nvmf/common.sh@296 -- # x722=() 00:08:38.544 17:17:57 -- nvmf/common.sh@296 -- # local -ga x722 00:08:38.544 17:17:57 -- nvmf/common.sh@297 -- # mlx=() 00:08:38.544 17:17:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:38.544 17:17:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.544 17:17:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.544 17:17:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.544 17:17:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.544 17:17:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.544 17:17:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.544 17:17:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.544 17:17:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.544 17:17:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.544 17:17:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.544 17:17:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.544 17:17:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:38.544 17:17:57 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:38.544 17:17:57 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:38.544 17:17:57 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:38.544 17:17:57 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:38.544 17:17:57 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:38.544 17:17:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:38.544 17:17:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:38.544 17:17:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:38.544 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:38.544 17:17:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:38.544 17:17:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:38.544 17:17:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.544 17:17:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.544 17:17:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:38.544 17:17:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.545 17:17:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:38.545 17:17:57 
-- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:38.545 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:38.545 17:17:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.545 17:17:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:38.545 17:17:57 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:38.545 17:17:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.545 17:17:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:38.545 17:17:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.545 17:17:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:38.545 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:38.545 17:17:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.545 17:17:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:38.545 17:17:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.545 17:17:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:38.545 17:17:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.545 17:17:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:38.545 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:38.545 17:17:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.545 17:17:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:38.545 17:17:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:38.545 17:17:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:38.545 17:17:57 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:38.545 17:17:57 -- nvmf/common.sh@57 -- # uname 00:08:38.545 17:17:57 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:38.545 17:17:57 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:38.545 17:17:57 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:38.545 17:17:57 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:38.545 17:17:57 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:38.545 17:17:57 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:38.545 17:17:57 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:38.545 17:17:57 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:38.545 17:17:57 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:38.545 17:17:57 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:38.545 17:17:57 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:38.545 17:17:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.545 17:17:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:38.545 17:17:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:38.545 17:17:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.545 17:17:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:38.545 17:17:57 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:08:38.545 17:17:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.545 17:17:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:38.545 17:17:57 -- nvmf/common.sh@104 -- # continue 2 00:08:38.545 17:17:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.545 17:17:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.545 17:17:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.545 17:17:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:38.545 17:17:57 -- nvmf/common.sh@104 -- # continue 2 00:08:38.545 17:17:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:38.545 17:17:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:38.545 17:17:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:38.545 17:17:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:38.545 17:17:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.545 17:17:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.545 17:17:57 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:38.545 17:17:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:38.545 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:38.545 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:38.545 altname enp217s0f0np0 00:08:38.545 altname ens818f0np0 00:08:38.545 inet 192.168.100.8/24 scope global mlx_0_0 00:08:38.545 valid_lft forever preferred_lft forever 00:08:38.545 17:17:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:38.545 17:17:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:38.545 17:17:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:38.545 17:17:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:38.545 17:17:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.545 17:17:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.545 17:17:57 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:38.545 17:17:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:38.545 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:38.545 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:38.545 altname enp217s0f1np1 00:08:38.545 altname ens818f1np1 00:08:38.545 inet 192.168.100.9/24 scope global mlx_0_1 00:08:38.545 valid_lft forever preferred_lft forever 00:08:38.545 17:17:57 -- nvmf/common.sh@410 -- # return 0 00:08:38.545 17:17:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:38.545 17:17:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:38.545 17:17:57 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:38.545 17:17:57 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:38.545 17:17:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.545 17:17:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:38.545 17:17:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:38.545 17:17:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.545 17:17:57 
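allocate_nic_ips walks the RDMA-capable interfaces returned by get_rdma_if_list and reads each one's IPv4 address exactly the way get_ip_address does in the trace: take the fourth field of `ip -o -4 addr show` and strip the prefix length. A sketch of that pipeline (the 192.168.100.8/9 addresses are configured earlier by the harness; here they are only read back):

    # Read the IPv4 address of an interface, mirroring get_ip_address above.
    get_ip() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip mlx_0_0    # 192.168.100.8 on this system
    get_ip mlx_0_1    # 192.168.100.9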
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:38.545 17:17:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.545 17:17:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.545 17:17:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:38.545 17:17:57 -- nvmf/common.sh@104 -- # continue 2 00:08:38.545 17:17:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.545 17:17:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.545 17:17:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.545 17:17:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:38.545 17:17:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:38.545 17:17:57 -- nvmf/common.sh@104 -- # continue 2 00:08:38.545 17:17:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:38.545 17:17:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:38.545 17:17:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:38.545 17:17:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:38.545 17:17:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.545 17:17:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.545 17:17:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:38.545 17:17:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:38.545 17:17:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:38.545 17:17:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:38.545 17:17:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.545 17:17:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.545 17:17:57 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:38.545 192.168.100.9' 00:08:38.545 17:17:57 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:38.545 192.168.100.9' 00:08:38.545 17:17:57 -- nvmf/common.sh@445 -- # head -n 1 00:08:38.545 17:17:57 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:38.545 17:17:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:38.545 192.168.100.9' 00:08:38.545 17:17:57 -- nvmf/common.sh@446 -- # tail -n +2 00:08:38.545 17:17:57 -- nvmf/common.sh@446 -- # head -n 1 00:08:38.545 17:17:57 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:38.545 17:17:57 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:38.545 17:17:57 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:38.545 17:17:57 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:38.545 17:17:57 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:38.545 17:17:57 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:38.545 17:17:57 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:38.545 17:17:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:38.545 17:17:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.545 17:17:57 -- common/autotest_common.sh@10 -- # set +x 00:08:38.545 17:17:57 -- nvmf/common.sh@469 -- # nvmfpid=2566380 00:08:38.545 17:17:57 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.545 17:17:57 -- nvmf/common.sh@470 -- # waitforlisten 2566380 00:08:38.545 17:17:57 -- common/autotest_common.sh@829 -- # '[' -z 2566380 ']' 00:08:38.545 17:17:57 -- common/autotest_common.sh@833 -- # local 
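The two addresses land in RDMA_IP_LIST as a newline-separated string, and the first and second target IPs are peeled off with the head/tail pipeline shown above before nvme-rdma is modprobed and the target application is started. A sketch of just that split, assuming the same two-address list:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 / 192.168.100.9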
rpc_addr=/var/tmp/spdk.sock 00:08:38.545 17:17:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.545 17:17:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.545 17:17:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.545 17:17:57 -- common/autotest_common.sh@10 -- # set +x 00:08:38.546 [2024-11-09 17:17:57.869621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:38.546 [2024-11-09 17:17:57.869669] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.546 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.546 [2024-11-09 17:17:57.940547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.546 [2024-11-09 17:17:58.014555] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:38.546 [2024-11-09 17:17:58.014661] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.546 [2024-11-09 17:17:58.014670] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.546 [2024-11-09 17:17:58.014680] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.546 [2024-11-09 17:17:58.014733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.546 [2024-11-09 17:17:58.014846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.546 [2024-11-09 17:17:58.014933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.546 [2024-11-09 17:17:58.014935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.113 17:17:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.113 17:17:58 -- common/autotest_common.sh@862 -- # return 0 00:08:39.113 17:17:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:39.113 17:17:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:39.113 17:17:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.113 17:17:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.113 17:17:58 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:39.113 17:17:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.113 17:17:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.113 [2024-11-09 17:17:58.759813] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f9c090/0x1fa0580) succeed. 00:08:39.113 [2024-11-09 17:17:58.769054] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f9d680/0x1fe1c20) succeed. 
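nvmfappstart launches the in-tree nvmf_tgt with a 4-core mask and waitforlisten then polls the RPC socket until the application answers, which is the point where the EAL/reactor notices above stop and rpc_cmd calls become possible. A rough stand-alone equivalent (the retry count and sleep interval are illustrative, not the harness's exact values):

    # Start the target and wait for /var/tmp/spdk.sock to accept RPCs.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done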
00:08:39.372 17:17:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.372 17:17:58 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:39.372 17:17:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.372 17:17:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.372 [2024-11-09 17:17:58.892371] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:39.372 17:17:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.372 17:17:58 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:39.372 17:17:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.372 17:17:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.372 17:17:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.372 17:17:58 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:39.372 17:17:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.372 17:17:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.372 17:17:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.372 17:17:58 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:39.372 17:17:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.372 17:17:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.372 17:17:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.372 17:17:58 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.372 17:17:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.372 17:17:58 -- target/referrals.sh@48 -- # jq length 00:08:39.372 17:17:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.373 17:17:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.373 17:17:58 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:39.373 17:17:58 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:39.373 17:17:58 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:39.373 17:17:58 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.373 17:17:58 -- target/referrals.sh@21 -- # sort 00:08:39.373 17:17:58 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:39.373 17:17:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.373 17:17:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.373 17:17:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.373 17:17:59 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:39.373 17:17:59 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:39.373 17:17:59 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:39.373 17:17:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:39.373 17:17:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:39.373 17:17:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:39.373 17:17:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:39.373 17:17:59 -- target/referrals.sh@26 -- # sort 00:08:39.373 17:17:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
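Spelled out as plain rpc.py invocations, the referral setup traced above reduces to the sequence below (a sketch; the test drives the same RPCs through its rpc_cmd wrapper and then asserts that `jq length` returns 3):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals | jq length    # expect 3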
00:08:39.373 17:17:59 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:39.373 17:17:59 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:39.373 17:17:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.373 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:08:39.373 17:17:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.373 17:17:59 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:39.373 17:17:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.373 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:08:39.373 17:17:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.633 17:17:59 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:39.633 17:17:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.633 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:08:39.633 17:17:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.633 17:17:59 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.633 17:17:59 -- target/referrals.sh@56 -- # jq length 00:08:39.633 17:17:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.633 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:08:39.633 17:17:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.633 17:17:59 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:39.633 17:17:59 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:39.633 17:17:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:39.633 17:17:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:39.633 17:17:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:39.633 17:17:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:39.633 17:17:59 -- target/referrals.sh@26 -- # sort 00:08:39.633 17:17:59 -- target/referrals.sh@26 -- # echo 00:08:39.633 17:17:59 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:39.633 17:17:59 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:39.633 17:17:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.633 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:08:39.633 17:17:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.633 17:17:59 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:39.633 17:17:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.633 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:08:39.633 17:17:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.633 17:17:59 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:39.633 17:17:59 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:39.633 17:17:59 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.633 17:17:59 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:39.633 17:17:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.633 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:08:39.633 17:17:59 -- 
target/referrals.sh@21 -- # sort 00:08:39.633 17:17:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.633 17:17:59 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:39.633 17:17:59 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:39.633 17:17:59 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:39.633 17:17:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:39.633 17:17:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:39.633 17:17:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:39.633 17:17:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:39.633 17:17:59 -- target/referrals.sh@26 -- # sort 00:08:39.892 17:17:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:39.892 17:17:59 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:39.892 17:17:59 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:39.893 17:17:59 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:39.893 17:17:59 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:39.893 17:17:59 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:39.893 17:17:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:39.893 17:17:59 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:39.893 17:17:59 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:39.893 17:17:59 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:39.893 17:17:59 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:39.893 17:17:59 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:39.893 17:17:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:40.151 17:17:59 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:40.151 17:17:59 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:40.151 17:17:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.151 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.151 17:17:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.151 17:17:59 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:40.151 17:17:59 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.151 17:17:59 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.151 17:17:59 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:40.151 17:17:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.151 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.151 17:17:59 -- target/referrals.sh@21 -- # 
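Throughout the test, get_referral_ips checks the same referral list from two sides: the target's own RPC view and what a host actually sees in the discovery log page via nvme discover, with both lists sorted before the string compare. A sketch of that check (the --hostnqn/--hostid options the harness passes are omitted here for brevity):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    rpc_ips=$($rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort | xargs)
    nvme_ips=$(nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort | xargs)
    [[ "$rpc_ips" == "$nvme_ips" ]] && echo "referral lists match: $rpc_ips"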
sort 00:08:40.151 17:17:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.151 17:17:59 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:40.151 17:17:59 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:40.151 17:17:59 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:40.151 17:17:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.151 17:17:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.151 17:17:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.151 17:17:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.151 17:17:59 -- target/referrals.sh@26 -- # sort 00:08:40.151 17:17:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:40.151 17:17:59 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:40.151 17:17:59 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:40.151 17:17:59 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:40.151 17:17:59 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:40.151 17:17:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:40.151 17:17:59 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.410 17:17:59 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:40.410 17:17:59 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:40.410 17:17:59 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:40.410 17:17:59 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:40.410 17:17:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:40.410 17:17:59 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.410 17:18:00 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:40.410 17:18:00 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:40.410 17:18:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.410 17:18:00 -- common/autotest_common.sh@10 -- # set +x 00:08:40.410 17:18:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.410 17:18:00 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.410 17:18:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.410 17:18:00 -- common/autotest_common.sh@10 -- # set +x 00:08:40.410 17:18:00 -- target/referrals.sh@82 -- # jq length 00:08:40.410 17:18:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.410 17:18:00 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:40.410 17:18:00 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:40.410 17:18:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.410 17:18:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.410 17:18:00 -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.410 17:18:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.410 17:18:00 -- target/referrals.sh@26 -- # sort 00:08:40.669 17:18:00 -- target/referrals.sh@26 -- # echo 00:08:40.669 17:18:00 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:40.669 17:18:00 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:40.669 17:18:00 -- target/referrals.sh@86 -- # nvmftestfini 00:08:40.669 17:18:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:40.669 17:18:00 -- nvmf/common.sh@116 -- # sync 00:08:40.669 17:18:00 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:40.669 17:18:00 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:40.669 17:18:00 -- nvmf/common.sh@119 -- # set +e 00:08:40.669 17:18:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:40.669 17:18:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:40.669 rmmod nvme_rdma 00:08:40.669 rmmod nvme_fabrics 00:08:40.669 17:18:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:40.669 17:18:00 -- nvmf/common.sh@123 -- # set -e 00:08:40.669 17:18:00 -- nvmf/common.sh@124 -- # return 0 00:08:40.669 17:18:00 -- nvmf/common.sh@477 -- # '[' -n 2566380 ']' 00:08:40.669 17:18:00 -- nvmf/common.sh@478 -- # killprocess 2566380 00:08:40.669 17:18:00 -- common/autotest_common.sh@936 -- # '[' -z 2566380 ']' 00:08:40.669 17:18:00 -- common/autotest_common.sh@940 -- # kill -0 2566380 00:08:40.669 17:18:00 -- common/autotest_common.sh@941 -- # uname 00:08:40.669 17:18:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:40.669 17:18:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2566380 00:08:40.669 17:18:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:40.669 17:18:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:40.669 17:18:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2566380' 00:08:40.669 killing process with pid 2566380 00:08:40.669 17:18:00 -- common/autotest_common.sh@955 -- # kill 2566380 00:08:40.669 17:18:00 -- common/autotest_common.sh@960 -- # wait 2566380 00:08:40.928 17:18:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:40.928 17:18:00 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:40.928 00:08:40.928 real 0m9.314s 00:08:40.928 user 0m12.938s 00:08:40.928 sys 0m5.600s 00:08:40.928 17:18:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:40.928 17:18:00 -- common/autotest_common.sh@10 -- # set +x 00:08:40.928 ************************************ 00:08:40.928 END TEST nvmf_referrals 00:08:40.928 ************************************ 00:08:40.928 17:18:00 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:40.928 17:18:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:40.928 17:18:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.188 17:18:00 -- common/autotest_common.sh@10 -- # set +x 00:08:41.188 ************************************ 00:08:41.188 START TEST nvmf_connect_disconnect 00:08:41.188 ************************************ 00:08:41.188 17:18:00 -- common/autotest_common.sh@1114 -- # 
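nvmftestfini above is the standard teardown: unload nvme-rdma and nvme-fabrics, then killprocess the target pid recorded at startup. The killprocess trace boils down to roughly the following (a sketch of the checks it performs, not a verbatim copy of autotest_common.sh):

    # Stop the nvmf_tgt started earlier: confirm the pid is still the reactor
    # process, send SIGTERM, and reap it.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null
    }
    killprocess "$nvmfpid"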
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:41.188 * Looking for test storage... 00:08:41.188 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:41.188 17:18:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:41.188 17:18:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:41.188 17:18:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:41.188 17:18:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:41.188 17:18:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:41.188 17:18:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:41.188 17:18:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:41.188 17:18:00 -- scripts/common.sh@335 -- # IFS=.-: 00:08:41.188 17:18:00 -- scripts/common.sh@335 -- # read -ra ver1 00:08:41.188 17:18:00 -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.188 17:18:00 -- scripts/common.sh@336 -- # read -ra ver2 00:08:41.188 17:18:00 -- scripts/common.sh@337 -- # local 'op=<' 00:08:41.188 17:18:00 -- scripts/common.sh@339 -- # ver1_l=2 00:08:41.188 17:18:00 -- scripts/common.sh@340 -- # ver2_l=1 00:08:41.188 17:18:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:41.188 17:18:00 -- scripts/common.sh@343 -- # case "$op" in 00:08:41.188 17:18:00 -- scripts/common.sh@344 -- # : 1 00:08:41.188 17:18:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:41.188 17:18:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:41.188 17:18:00 -- scripts/common.sh@364 -- # decimal 1 00:08:41.188 17:18:00 -- scripts/common.sh@352 -- # local d=1 00:08:41.188 17:18:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.188 17:18:00 -- scripts/common.sh@354 -- # echo 1 00:08:41.188 17:18:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:41.188 17:18:00 -- scripts/common.sh@365 -- # decimal 2 00:08:41.188 17:18:00 -- scripts/common.sh@352 -- # local d=2 00:08:41.188 17:18:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.188 17:18:00 -- scripts/common.sh@354 -- # echo 2 00:08:41.188 17:18:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:41.188 17:18:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:41.188 17:18:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:41.188 17:18:00 -- scripts/common.sh@367 -- # return 0 00:08:41.188 17:18:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.188 17:18:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:41.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.188 --rc genhtml_branch_coverage=1 00:08:41.188 --rc genhtml_function_coverage=1 00:08:41.188 --rc genhtml_legend=1 00:08:41.188 --rc geninfo_all_blocks=1 00:08:41.188 --rc geninfo_unexecuted_blocks=1 00:08:41.188 00:08:41.188 ' 00:08:41.188 17:18:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:41.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.188 --rc genhtml_branch_coverage=1 00:08:41.188 --rc genhtml_function_coverage=1 00:08:41.188 --rc genhtml_legend=1 00:08:41.188 --rc geninfo_all_blocks=1 00:08:41.188 --rc geninfo_unexecuted_blocks=1 00:08:41.188 00:08:41.188 ' 00:08:41.188 17:18:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:41.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.188 --rc genhtml_branch_coverage=1 00:08:41.188 --rc genhtml_function_coverage=1 
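The block above is the coverage-tooling gate every test file runs: scripts/common.sh compares the installed lcov version against the 1.15/2 thresholds by splitting on '.', '-' and ':' and walking the components. Reduced to a stand-alone sketch (version_lt is a local name for what the traced lt/cmp_versions pair does):

    # Return success if dotted version $1 is strictly less than $2.
    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 < 2: use the newer LCOV_OPTS"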
00:08:41.188 --rc genhtml_legend=1 00:08:41.188 --rc geninfo_all_blocks=1 00:08:41.188 --rc geninfo_unexecuted_blocks=1 00:08:41.188 00:08:41.188 ' 00:08:41.188 17:18:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:41.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.188 --rc genhtml_branch_coverage=1 00:08:41.188 --rc genhtml_function_coverage=1 00:08:41.188 --rc genhtml_legend=1 00:08:41.188 --rc geninfo_all_blocks=1 00:08:41.188 --rc geninfo_unexecuted_blocks=1 00:08:41.188 00:08:41.188 ' 00:08:41.188 17:18:00 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.188 17:18:00 -- nvmf/common.sh@7 -- # uname -s 00:08:41.188 17:18:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.188 17:18:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.188 17:18:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.188 17:18:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.188 17:18:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.188 17:18:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.188 17:18:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.188 17:18:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.188 17:18:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.188 17:18:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.189 17:18:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:41.189 17:18:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:41.189 17:18:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.189 17:18:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.189 17:18:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.189 17:18:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:41.189 17:18:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.189 17:18:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.189 17:18:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.189 17:18:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.189 17:18:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.189 17:18:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.189 17:18:00 -- paths/export.sh@5 -- # export PATH 00:08:41.189 17:18:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.189 17:18:00 -- nvmf/common.sh@46 -- # : 0 00:08:41.189 17:18:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:41.189 17:18:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:41.189 17:18:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:41.189 17:18:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.189 17:18:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.189 17:18:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:41.189 17:18:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:41.189 17:18:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:41.189 17:18:00 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.189 17:18:00 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.189 17:18:00 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:41.189 17:18:00 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:41.189 17:18:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.189 17:18:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:41.189 17:18:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:41.189 17:18:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:41.189 17:18:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.189 17:18:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.189 17:18:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.189 17:18:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:41.189 17:18:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:41.189 17:18:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:41.189 17:18:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.762 17:18:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:47.762 17:18:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:47.762 17:18:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:47.762 17:18:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:47.762 17:18:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:47.762 17:18:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:47.762 17:18:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:47.763 17:18:07 -- nvmf/common.sh@294 -- # net_devs=() 00:08:47.763 17:18:07 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:08:47.763 17:18:07 -- nvmf/common.sh@295 -- # e810=() 00:08:47.763 17:18:07 -- nvmf/common.sh@295 -- # local -ga e810 00:08:47.763 17:18:07 -- nvmf/common.sh@296 -- # x722=() 00:08:47.763 17:18:07 -- nvmf/common.sh@296 -- # local -ga x722 00:08:47.763 17:18:07 -- nvmf/common.sh@297 -- # mlx=() 00:08:47.763 17:18:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:47.763 17:18:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.763 17:18:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.763 17:18:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.763 17:18:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.763 17:18:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.763 17:18:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.763 17:18:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.763 17:18:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.763 17:18:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.763 17:18:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.763 17:18:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.763 17:18:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:47.763 17:18:07 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:47.763 17:18:07 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:47.763 17:18:07 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:47.763 17:18:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:47.763 17:18:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:47.763 17:18:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:47.763 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:47.763 17:18:07 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:47.763 17:18:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:47.763 17:18:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:47.763 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:47.763 17:18:07 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:47.763 17:18:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:47.763 17:18:07 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:47.763 17:18:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.763 17:18:07 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:47.763 17:18:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.763 17:18:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:47.763 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:47.763 17:18:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.763 17:18:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:47.763 17:18:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.763 17:18:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:47.763 17:18:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.763 17:18:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:47.763 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:47.763 17:18:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.763 17:18:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:47.763 17:18:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:47.763 17:18:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:47.763 17:18:07 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:47.763 17:18:07 -- nvmf/common.sh@57 -- # uname 00:08:47.763 17:18:07 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:47.763 17:18:07 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:47.763 17:18:07 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:47.763 17:18:07 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:47.763 17:18:07 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:47.763 17:18:07 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:47.763 17:18:07 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:47.763 17:18:07 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:47.763 17:18:07 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:47.763 17:18:07 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:47.763 17:18:07 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:47.763 17:18:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:47.763 17:18:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:47.763 17:18:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:47.763 17:18:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:47.763 17:18:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:47.763 17:18:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:47.763 17:18:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.763 17:18:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:47.763 17:18:07 -- nvmf/common.sh@104 -- # continue 2 00:08:47.763 17:18:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:47.763 17:18:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.763 17:18:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.763 17:18:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:47.763 17:18:07 -- nvmf/common.sh@104 -- # continue 2 00:08:47.763 17:18:07 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:47.763 17:18:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:47.763 17:18:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:47.763 17:18:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:47.763 17:18:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:47.763 17:18:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:47.763 17:18:07 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:47.763 17:18:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:47.763 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:47.763 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:47.763 altname enp217s0f0np0 00:08:47.763 altname ens818f0np0 00:08:47.763 inet 192.168.100.8/24 scope global mlx_0_0 00:08:47.763 valid_lft forever preferred_lft forever 00:08:47.763 17:18:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:47.763 17:18:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:47.763 17:18:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:47.763 17:18:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:47.763 17:18:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:47.763 17:18:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:47.763 17:18:07 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:47.763 17:18:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:47.763 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:47.763 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:47.763 altname enp217s0f1np1 00:08:47.763 altname ens818f1np1 00:08:47.763 inet 192.168.100.9/24 scope global mlx_0_1 00:08:47.763 valid_lft forever preferred_lft forever 00:08:47.763 17:18:07 -- nvmf/common.sh@410 -- # return 0 00:08:47.763 17:18:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:47.763 17:18:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:47.763 17:18:07 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:47.763 17:18:07 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:47.763 17:18:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:47.763 17:18:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:47.763 17:18:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:47.763 17:18:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:47.763 17:18:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:47.763 17:18:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:47.763 17:18:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.763 17:18:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:47.763 17:18:07 -- nvmf/common.sh@104 -- # continue 2 00:08:47.763 17:18:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:47.763 17:18:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.763 17:18:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.763 17:18:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:47.763 17:18:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 
00:08:47.763 17:18:07 -- nvmf/common.sh@104 -- # continue 2 00:08:47.763 17:18:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:47.763 17:18:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:47.763 17:18:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:47.763 17:18:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:47.763 17:18:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:47.763 17:18:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:47.764 17:18:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:47.764 17:18:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:47.764 17:18:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:47.764 17:18:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:47.764 17:18:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:47.764 17:18:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:47.764 17:18:07 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:47.764 192.168.100.9' 00:08:47.764 17:18:07 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:47.764 192.168.100.9' 00:08:47.764 17:18:07 -- nvmf/common.sh@445 -- # head -n 1 00:08:47.764 17:18:07 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:47.764 17:18:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:47.764 192.168.100.9' 00:08:47.764 17:18:07 -- nvmf/common.sh@446 -- # tail -n +2 00:08:47.764 17:18:07 -- nvmf/common.sh@446 -- # head -n 1 00:08:47.764 17:18:07 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:47.764 17:18:07 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:47.764 17:18:07 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:47.764 17:18:07 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:47.764 17:18:07 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:47.764 17:18:07 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:47.764 17:18:07 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:47.764 17:18:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:47.764 17:18:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:47.764 17:18:07 -- common/autotest_common.sh@10 -- # set +x 00:08:47.764 17:18:07 -- nvmf/common.sh@469 -- # nvmfpid=2570850 00:08:47.764 17:18:07 -- nvmf/common.sh@470 -- # waitforlisten 2570850 00:08:47.764 17:18:07 -- common/autotest_common.sh@829 -- # '[' -z 2570850 ']' 00:08:47.764 17:18:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.764 17:18:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:47.764 17:18:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.764 17:18:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:47.764 17:18:07 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:47.764 17:18:07 -- common/autotest_common.sh@10 -- # set +x 00:08:47.764 [2024-11-09 17:18:07.429967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:47.764 [2024-11-09 17:18:07.430020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.764 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.764 [2024-11-09 17:18:07.499962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.023 [2024-11-09 17:18:07.575821] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:48.023 [2024-11-09 17:18:07.575943] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.023 [2024-11-09 17:18:07.575953] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.023 [2024-11-09 17:18:07.575962] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.023 [2024-11-09 17:18:07.576007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.023 [2024-11-09 17:18:07.576100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.023 [2024-11-09 17:18:07.576185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.023 [2024-11-09 17:18:07.576187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.592 17:18:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:48.592 17:18:08 -- common/autotest_common.sh@862 -- # return 0 00:08:48.592 17:18:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:48.592 17:18:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:48.592 17:18:08 -- common/autotest_common.sh@10 -- # set +x 00:08:48.592 17:18:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.592 17:18:08 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:48.592 17:18:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.592 17:18:08 -- common/autotest_common.sh@10 -- # set +x 00:08:48.592 [2024-11-09 17:18:08.292773] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:48.592 [2024-11-09 17:18:08.313707] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f7d090/0x1f81580) succeed. 00:08:48.592 [2024-11-09 17:18:08.322783] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f7e680/0x1fc2c20) succeed. 
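From here the connect/disconnect test proper begins: a 64 MiB Malloc bdev with 512-byte blocks is created, exposed as a namespace of nqn.2016-06.io.spdk:cnode1 behind an RDMA listener on port 4420, and the host then connects and disconnects 100 times (num_iterations=100, NVME_CONNECT='nvme connect -i 8'), producing the run of "disconnected 1 controller(s)" lines below. One pass, sketched with plain rpc.py/nvme-cli calls (the disconnect step is implied by the output; the harness adds device-name bookkeeping around it):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                        # -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    for i in $(seq 1 100); do
        nvme connect -i 8 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
    done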
00:08:48.852 17:18:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.852 17:18:08 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:48.852 17:18:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.852 17:18:08 -- common/autotest_common.sh@10 -- # set +x 00:08:48.852 17:18:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.852 17:18:08 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:48.852 17:18:08 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:48.852 17:18:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.852 17:18:08 -- common/autotest_common.sh@10 -- # set +x 00:08:48.852 17:18:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.852 17:18:08 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:48.852 17:18:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.852 17:18:08 -- common/autotest_common.sh@10 -- # set +x 00:08:48.852 17:18:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.852 17:18:08 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:48.852 17:18:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.852 17:18:08 -- common/autotest_common.sh@10 -- # set +x 00:08:48.852 [2024-11-09 17:18:08.462391] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:48.852 17:18:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.852 17:18:08 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:48.853 17:18:08 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:48.853 17:18:08 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:48.853 17:18:08 -- target/connect_disconnect.sh@34 -- # set +x 00:08:52.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.369 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:32.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.073 17:23:23 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:04.073 17:23:23 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:04.073 17:23:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:04.073 17:23:23 -- nvmf/common.sh@116 -- # sync 00:14:04.073 17:23:23 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:04.073 17:23:23 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:04.073 17:23:23 -- nvmf/common.sh@119 -- # set +e 00:14:04.073 17:23:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:04.073 17:23:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:04.073 rmmod nvme_rdma 00:14:04.073 rmmod nvme_fabrics 00:14:04.073 17:23:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:04.073 17:23:23 -- nvmf/common.sh@123 -- # set -e 00:14:04.073 17:23:23 -- nvmf/common.sh@124 -- # return 0 00:14:04.073 17:23:23 -- nvmf/common.sh@477 -- # '[' -n 2570850 ']' 00:14:04.073 17:23:23 -- nvmf/common.sh@478 -- # killprocess 2570850 00:14:04.073 17:23:23 -- common/autotest_common.sh@936 -- # '[' -z 2570850 ']' 00:14:04.073 17:23:23 -- common/autotest_common.sh@940 -- # kill -0 2570850 00:14:04.073 17:23:23 -- common/autotest_common.sh@941 -- # uname 00:14:04.073 17:23:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:04.073 17:23:23 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2570850 00:14:04.073 17:23:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:04.073 17:23:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:04.073 17:23:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2570850' 00:14:04.073 killing process with pid 2570850 00:14:04.073 17:23:23 -- common/autotest_common.sh@955 -- # kill 2570850 00:14:04.073 17:23:23 -- common/autotest_common.sh@960 -- # wait 2570850 00:14:04.333 17:23:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:04.333 17:23:23 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:04.333 00:14:04.333 real 5m23.192s 00:14:04.333 user 21m2.509s 00:14:04.333 sys 0m17.854s 00:14:04.333 17:23:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:04.333 17:23:23 -- common/autotest_common.sh@10 -- # set +x 00:14:04.333 ************************************ 00:14:04.333 END TEST nvmf_connect_disconnect 00:14:04.333 ************************************ 00:14:04.333 17:23:23 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:04.333 17:23:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:04.333 17:23:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:04.333 17:23:23 -- common/autotest_common.sh@10 -- # set +x 00:14:04.333 ************************************ 00:14:04.333 START TEST nvmf_multitarget 00:14:04.333 ************************************ 00:14:04.333 17:23:23 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:04.333 * Looking for test storage... 00:14:04.333 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:04.333 17:23:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:04.333 17:23:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:04.333 17:23:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:04.593 17:23:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:04.593 17:23:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:04.593 17:23:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:04.593 17:23:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:04.593 17:23:24 -- scripts/common.sh@335 -- # IFS=.-: 00:14:04.593 17:23:24 -- scripts/common.sh@335 -- # read -ra ver1 00:14:04.593 17:23:24 -- scripts/common.sh@336 -- # IFS=.-: 00:14:04.593 17:23:24 -- scripts/common.sh@336 -- # read -ra ver2 00:14:04.593 17:23:24 -- scripts/common.sh@337 -- # local 'op=<' 00:14:04.593 17:23:24 -- scripts/common.sh@339 -- # ver1_l=2 00:14:04.593 17:23:24 -- scripts/common.sh@340 -- # ver2_l=1 00:14:04.593 17:23:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:04.593 17:23:24 -- scripts/common.sh@343 -- # case "$op" in 00:14:04.593 17:23:24 -- scripts/common.sh@344 -- # : 1 00:14:04.593 17:23:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:04.593 17:23:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:04.593 17:23:24 -- scripts/common.sh@364 -- # decimal 1 00:14:04.593 17:23:24 -- scripts/common.sh@352 -- # local d=1 00:14:04.593 17:23:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:04.593 17:23:24 -- scripts/common.sh@354 -- # echo 1 00:14:04.593 17:23:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:04.593 17:23:24 -- scripts/common.sh@365 -- # decimal 2 00:14:04.593 17:23:24 -- scripts/common.sh@352 -- # local d=2 00:14:04.593 17:23:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:04.593 17:23:24 -- scripts/common.sh@354 -- # echo 2 00:14:04.593 17:23:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:04.593 17:23:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:04.593 17:23:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:04.593 17:23:24 -- scripts/common.sh@367 -- # return 0 00:14:04.593 17:23:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:04.593 17:23:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:04.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.593 --rc genhtml_branch_coverage=1 00:14:04.593 --rc genhtml_function_coverage=1 00:14:04.593 --rc genhtml_legend=1 00:14:04.593 --rc geninfo_all_blocks=1 00:14:04.593 --rc geninfo_unexecuted_blocks=1 00:14:04.593 00:14:04.593 ' 00:14:04.593 17:23:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:04.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.593 --rc genhtml_branch_coverage=1 00:14:04.593 --rc genhtml_function_coverage=1 00:14:04.593 --rc genhtml_legend=1 00:14:04.593 --rc geninfo_all_blocks=1 00:14:04.593 --rc geninfo_unexecuted_blocks=1 00:14:04.593 00:14:04.593 ' 00:14:04.593 17:23:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:04.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.593 --rc genhtml_branch_coverage=1 00:14:04.593 --rc genhtml_function_coverage=1 00:14:04.593 --rc genhtml_legend=1 00:14:04.593 --rc geninfo_all_blocks=1 00:14:04.593 --rc geninfo_unexecuted_blocks=1 00:14:04.593 00:14:04.593 ' 00:14:04.593 17:23:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:04.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.593 --rc genhtml_branch_coverage=1 00:14:04.593 --rc genhtml_function_coverage=1 00:14:04.593 --rc genhtml_legend=1 00:14:04.593 --rc geninfo_all_blocks=1 00:14:04.593 --rc geninfo_unexecuted_blocks=1 00:14:04.593 00:14:04.593 ' 00:14:04.593 17:23:24 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.593 17:23:24 -- nvmf/common.sh@7 -- # uname -s 00:14:04.593 17:23:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.593 17:23:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.593 17:23:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.593 17:23:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.593 17:23:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.593 17:23:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.593 17:23:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.593 17:23:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.593 17:23:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.593 17:23:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.593 17:23:24 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:04.593 17:23:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:04.593 17:23:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.593 17:23:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.593 17:23:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.593 17:23:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:04.593 17:23:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.593 17:23:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.593 17:23:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.593 17:23:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.593 17:23:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.593 17:23:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.593 17:23:24 -- paths/export.sh@5 -- # export PATH 00:14:04.593 17:23:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.593 17:23:24 -- nvmf/common.sh@46 -- # : 0 00:14:04.593 17:23:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:04.593 17:23:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:04.593 17:23:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:04.593 17:23:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.593 17:23:24 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.593 17:23:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:04.593 17:23:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:04.593 17:23:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:04.593 17:23:24 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:04.593 17:23:24 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:04.593 17:23:24 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:04.593 17:23:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.593 17:23:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:04.593 17:23:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:04.593 17:23:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:04.593 17:23:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.593 17:23:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.593 17:23:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.593 17:23:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:04.593 17:23:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:04.593 17:23:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:04.593 17:23:24 -- common/autotest_common.sh@10 -- # set +x 00:14:11.166 17:23:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:11.166 17:23:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:11.166 17:23:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:11.166 17:23:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:11.166 17:23:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:11.166 17:23:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:11.166 17:23:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:11.166 17:23:30 -- nvmf/common.sh@294 -- # net_devs=() 00:14:11.166 17:23:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:11.166 17:23:30 -- nvmf/common.sh@295 -- # e810=() 00:14:11.166 17:23:30 -- nvmf/common.sh@295 -- # local -ga e810 00:14:11.166 17:23:30 -- nvmf/common.sh@296 -- # x722=() 00:14:11.166 17:23:30 -- nvmf/common.sh@296 -- # local -ga x722 00:14:11.166 17:23:30 -- nvmf/common.sh@297 -- # mlx=() 00:14:11.166 17:23:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:11.166 17:23:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.166 17:23:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.166 17:23:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.166 17:23:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.166 17:23:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.166 17:23:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.166 17:23:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.166 17:23:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.166 17:23:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.166 17:23:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.166 17:23:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.166 17:23:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:11.166 17:23:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 
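In the common.sh setup traced above, the initiator identity is generated once with nvme-cli and reused for every connect: the host NQN comes from nvme gen-hostnqn and the host ID is its trailing UUID. A small sketch of that derivation (variable names and the exact string handling are illustrative, not the script's verbatim code):

  HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*:}                    # keep only the UUID after the last ':'
  NVME_HOST=(--hostnqn="$HOSTNQN" --hostid="$HOSTID")
  # later: nvme connect "${NVME_HOST[@]}" -t rdma -n <subsystem nqn> -a <addr> -s 4420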
00:14:11.166 17:23:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:11.166 17:23:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:11.166 17:23:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:11.166 17:23:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:11.166 17:23:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:11.166 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:11.166 17:23:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:11.166 17:23:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:11.166 17:23:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:11.166 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:11.166 17:23:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:11.166 17:23:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:11.166 17:23:30 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:11.166 17:23:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.166 17:23:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:11.166 17:23:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.166 17:23:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:11.166 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:11.166 17:23:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.166 17:23:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:11.166 17:23:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.166 17:23:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:11.166 17:23:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.166 17:23:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:11.166 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:11.166 17:23:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.166 17:23:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:11.166 17:23:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:11.166 17:23:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:11.166 17:23:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:11.166 17:23:30 -- nvmf/common.sh@57 -- # uname 00:14:11.166 17:23:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:11.166 17:23:30 -- nvmf/common.sh@61 -- # modprobe ib_cm 
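The discovery above selects the physical test NICs by PCI vendor/device ID (0x15b3:0x1015 here) and maps each function to its kernel net device through sysfs, exactly as the pci_net_devs expansion shows. A condensed sketch of the same idea, using lspci instead of the harness's internal pci_bus_cache, so treat it as illustrative:

  # Enumerate Mellanox 0x1015 functions and find their net interfaces
  net_devs=()
  for pci in $(lspci -D -d 15b3:1015 | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] && net_devs+=("$(basename "$netdir")")
      done
  done
  echo "RDMA-capable net devices: ${net_devs[*]}"   # e.g. mlx_0_0 mlx_0_1

  # The harness then loads the RDMA stack before assigning test IPs, in the same order as the trace
  modprobe ib_cm;  modprobe ib_core;  modprobe ib_umad;  modprobe ib_uverbs
  modprobe iw_cm;  modprobe rdma_cm;  modprobe rdma_ucm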
00:14:11.166 17:23:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:11.166 17:23:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:11.166 17:23:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:11.166 17:23:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:11.166 17:23:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:11.166 17:23:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:11.166 17:23:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:11.166 17:23:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:11.166 17:23:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:11.166 17:23:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:11.166 17:23:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:11.166 17:23:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:11.166 17:23:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:11.166 17:23:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:11.166 17:23:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:11.166 17:23:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:11.166 17:23:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:11.166 17:23:30 -- nvmf/common.sh@104 -- # continue 2 00:14:11.166 17:23:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:11.166 17:23:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:11.166 17:23:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:11.166 17:23:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:11.166 17:23:30 -- nvmf/common.sh@104 -- # continue 2 00:14:11.166 17:23:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:11.166 17:23:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:14:11.166 17:23:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:11.166 17:23:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:11.166 17:23:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:11.166 17:23:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:11.166 17:23:30 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:11.166 17:23:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:11.166 17:23:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:11.166 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:11.166 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:11.166 altname enp217s0f0np0 00:14:11.166 altname ens818f0np0 00:14:11.166 inet 192.168.100.8/24 scope global mlx_0_0 00:14:11.166 valid_lft forever preferred_lft forever 00:14:11.166 17:23:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:11.166 17:23:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:11.166 17:23:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:11.166 17:23:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:11.167 17:23:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:11.167 17:23:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:11.167 17:23:30 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:11.167 17:23:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:11.167 17:23:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:11.167 7: mlx_0_1: mtu 1500 qdisc mq 
state DOWN group default qlen 1000 00:14:11.167 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:11.167 altname enp217s0f1np1 00:14:11.167 altname ens818f1np1 00:14:11.167 inet 192.168.100.9/24 scope global mlx_0_1 00:14:11.167 valid_lft forever preferred_lft forever 00:14:11.167 17:23:30 -- nvmf/common.sh@410 -- # return 0 00:14:11.167 17:23:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:11.167 17:23:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:11.167 17:23:30 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:11.167 17:23:30 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:11.167 17:23:30 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:11.167 17:23:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:11.167 17:23:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:11.167 17:23:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:11.167 17:23:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:11.167 17:23:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:11.167 17:23:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:11.167 17:23:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:11.167 17:23:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:11.167 17:23:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:11.167 17:23:30 -- nvmf/common.sh@104 -- # continue 2 00:14:11.167 17:23:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:11.167 17:23:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:11.167 17:23:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:11.167 17:23:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:11.167 17:23:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:11.167 17:23:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:11.167 17:23:30 -- nvmf/common.sh@104 -- # continue 2 00:14:11.167 17:23:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:11.167 17:23:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:11.167 17:23:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:11.167 17:23:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:11.167 17:23:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:11.167 17:23:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:11.167 17:23:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:11.167 17:23:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:11.167 17:23:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:11.167 17:23:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:11.167 17:23:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:11.167 17:23:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:11.167 17:23:30 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:11.167 192.168.100.9' 00:14:11.167 17:23:30 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:11.167 192.168.100.9' 00:14:11.167 17:23:30 -- nvmf/common.sh@445 -- # head -n 1 00:14:11.167 17:23:30 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:11.167 17:23:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:11.167 192.168.100.9' 00:14:11.167 17:23:30 -- nvmf/common.sh@446 -- # head -n 1 00:14:11.167 17:23:30 -- nvmf/common.sh@446 -- # tail -n +2 00:14:11.167 17:23:30 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:11.167 17:23:30 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:11.167 17:23:30 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:11.167 17:23:30 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:11.167 17:23:30 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:11.167 17:23:30 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:11.167 17:23:30 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:11.167 17:23:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:11.167 17:23:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:11.167 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:14:11.167 17:23:30 -- nvmf/common.sh@469 -- # nvmfpid=2630589 00:14:11.167 17:23:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:11.167 17:23:30 -- nvmf/common.sh@470 -- # waitforlisten 2630589 00:14:11.167 17:23:30 -- common/autotest_common.sh@829 -- # '[' -z 2630589 ']' 00:14:11.167 17:23:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.167 17:23:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.167 17:23:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.167 17:23:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.167 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:14:11.167 [2024-11-09 17:23:30.571605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:11.167 [2024-11-09 17:23:30.571661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.167 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.167 [2024-11-09 17:23:30.642826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.167 [2024-11-09 17:23:30.718911] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:11.167 [2024-11-09 17:23:30.719024] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.167 [2024-11-09 17:23:30.719034] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.167 [2024-11-09 17:23:30.719044] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
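The address probing traced above reduces to one pipeline per interface: take the first IPv4 address reported by ip -o and strip the prefix length. A self-contained sketch of how the first and second target IPs end up as 192.168.100.8 and 192.168.100.9 (interface names as seen in this log):

  get_ip_address() {
      # First IPv4 address on an interface, without the /prefix — same pipeline as the trace
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IPS=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IPS" | head -n 1)                  # 192.168.100.8 here
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IPS" | tail -n +2 | head -n 1)    # 192.168.100.9 here
  modprobe nvme-rdma    # host driver needed before any 'nvme connect -t rdma'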
00:14:11.167 [2024-11-09 17:23:30.719094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.167 [2024-11-09 17:23:30.719189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.167 [2024-11-09 17:23:30.719251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.167 [2024-11-09 17:23:30.719253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.735 17:23:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.735 17:23:31 -- common/autotest_common.sh@862 -- # return 0 00:14:11.735 17:23:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:11.735 17:23:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.735 17:23:31 -- common/autotest_common.sh@10 -- # set +x 00:14:11.735 17:23:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.735 17:23:31 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:11.735 17:23:31 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:11.735 17:23:31 -- target/multitarget.sh@21 -- # jq length 00:14:11.995 17:23:31 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:11.995 17:23:31 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:11.995 "nvmf_tgt_1" 00:14:11.995 17:23:31 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:11.995 "nvmf_tgt_2" 00:14:12.254 17:23:31 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:12.254 17:23:31 -- target/multitarget.sh@28 -- # jq length 00:14:12.254 17:23:31 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:12.254 17:23:31 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:12.254 true 00:14:12.254 17:23:31 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:12.513 true 00:14:12.513 17:23:32 -- target/multitarget.sh@35 -- # jq length 00:14:12.513 17:23:32 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:12.513 17:23:32 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:12.513 17:23:32 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:12.513 17:23:32 -- target/multitarget.sh@41 -- # nvmftestfini 00:14:12.513 17:23:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:12.513 17:23:32 -- nvmf/common.sh@116 -- # sync 00:14:12.513 17:23:32 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:12.513 17:23:32 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:12.513 17:23:32 -- nvmf/common.sh@119 -- # set +e 00:14:12.513 17:23:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:12.513 17:23:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:12.513 rmmod nvme_rdma 00:14:12.513 rmmod nvme_fabrics 00:14:12.513 17:23:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:12.513 17:23:32 -- nvmf/common.sh@123 -- # set -e 00:14:12.513 17:23:32 -- nvmf/common.sh@124 -- # 
return 0 00:14:12.513 17:23:32 -- nvmf/common.sh@477 -- # '[' -n 2630589 ']' 00:14:12.513 17:23:32 -- nvmf/common.sh@478 -- # killprocess 2630589 00:14:12.513 17:23:32 -- common/autotest_common.sh@936 -- # '[' -z 2630589 ']' 00:14:12.513 17:23:32 -- common/autotest_common.sh@940 -- # kill -0 2630589 00:14:12.513 17:23:32 -- common/autotest_common.sh@941 -- # uname 00:14:12.513 17:23:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:12.513 17:23:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2630589 00:14:12.773 17:23:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:12.773 17:23:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:12.773 17:23:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2630589' 00:14:12.773 killing process with pid 2630589 00:14:12.773 17:23:32 -- common/autotest_common.sh@955 -- # kill 2630589 00:14:12.773 17:23:32 -- common/autotest_common.sh@960 -- # wait 2630589 00:14:12.773 17:23:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:12.773 17:23:32 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:12.773 00:14:12.773 real 0m8.576s 00:14:12.773 user 0m9.736s 00:14:12.773 sys 0m5.392s 00:14:12.773 17:23:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:12.773 17:23:32 -- common/autotest_common.sh@10 -- # set +x 00:14:12.773 ************************************ 00:14:12.773 END TEST nvmf_multitarget 00:14:12.773 ************************************ 00:14:13.033 17:23:32 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:13.033 17:23:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:13.033 17:23:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:13.033 17:23:32 -- common/autotest_common.sh@10 -- # set +x 00:14:13.033 ************************************ 00:14:13.033 START TEST nvmf_rpc 00:14:13.033 ************************************ 00:14:13.033 17:23:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:13.033 * Looking for test storage... 
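The nvmf_multitarget run that just completed is a thin wrapper around three target-management RPCs: list the targets, add two more, delete them again, and assert the count with jq at each step. A minimal sketch of those checks, assuming the multitarget_rpc.py helper from the SPDK test tree and jq on PATH:

  RPC=./test/nvmf/target/multitarget_rpc.py     # path inside an SPDK checkout (assumed)
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]     # only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]     # default + the two just created
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]     # back to the default target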
00:14:13.033 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:13.033 17:23:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:13.033 17:23:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:13.033 17:23:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:13.033 17:23:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:13.033 17:23:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:13.033 17:23:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:13.033 17:23:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:13.033 17:23:32 -- scripts/common.sh@335 -- # IFS=.-: 00:14:13.033 17:23:32 -- scripts/common.sh@335 -- # read -ra ver1 00:14:13.033 17:23:32 -- scripts/common.sh@336 -- # IFS=.-: 00:14:13.033 17:23:32 -- scripts/common.sh@336 -- # read -ra ver2 00:14:13.033 17:23:32 -- scripts/common.sh@337 -- # local 'op=<' 00:14:13.033 17:23:32 -- scripts/common.sh@339 -- # ver1_l=2 00:14:13.033 17:23:32 -- scripts/common.sh@340 -- # ver2_l=1 00:14:13.033 17:23:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:13.033 17:23:32 -- scripts/common.sh@343 -- # case "$op" in 00:14:13.033 17:23:32 -- scripts/common.sh@344 -- # : 1 00:14:13.033 17:23:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:13.033 17:23:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:13.033 17:23:32 -- scripts/common.sh@364 -- # decimal 1 00:14:13.033 17:23:32 -- scripts/common.sh@352 -- # local d=1 00:14:13.033 17:23:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:13.033 17:23:32 -- scripts/common.sh@354 -- # echo 1 00:14:13.033 17:23:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:13.033 17:23:32 -- scripts/common.sh@365 -- # decimal 2 00:14:13.033 17:23:32 -- scripts/common.sh@352 -- # local d=2 00:14:13.033 17:23:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:13.033 17:23:32 -- scripts/common.sh@354 -- # echo 2 00:14:13.033 17:23:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:13.033 17:23:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:13.033 17:23:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:13.033 17:23:32 -- scripts/common.sh@367 -- # return 0 00:14:13.033 17:23:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:13.033 17:23:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:13.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.033 --rc genhtml_branch_coverage=1 00:14:13.033 --rc genhtml_function_coverage=1 00:14:13.033 --rc genhtml_legend=1 00:14:13.033 --rc geninfo_all_blocks=1 00:14:13.033 --rc geninfo_unexecuted_blocks=1 00:14:13.033 00:14:13.033 ' 00:14:13.033 17:23:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:13.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.034 --rc genhtml_branch_coverage=1 00:14:13.034 --rc genhtml_function_coverage=1 00:14:13.034 --rc genhtml_legend=1 00:14:13.034 --rc geninfo_all_blocks=1 00:14:13.034 --rc geninfo_unexecuted_blocks=1 00:14:13.034 00:14:13.034 ' 00:14:13.034 17:23:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:13.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.034 --rc genhtml_branch_coverage=1 00:14:13.034 --rc genhtml_function_coverage=1 00:14:13.034 --rc genhtml_legend=1 00:14:13.034 --rc geninfo_all_blocks=1 00:14:13.034 --rc geninfo_unexecuted_blocks=1 00:14:13.034 00:14:13.034 ' 
00:14:13.034 17:23:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:13.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.034 --rc genhtml_branch_coverage=1 00:14:13.034 --rc genhtml_function_coverage=1 00:14:13.034 --rc genhtml_legend=1 00:14:13.034 --rc geninfo_all_blocks=1 00:14:13.034 --rc geninfo_unexecuted_blocks=1 00:14:13.034 00:14:13.034 ' 00:14:13.034 17:23:32 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.034 17:23:32 -- nvmf/common.sh@7 -- # uname -s 00:14:13.034 17:23:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.034 17:23:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.034 17:23:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.034 17:23:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.034 17:23:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.034 17:23:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.034 17:23:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.034 17:23:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.034 17:23:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.034 17:23:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.034 17:23:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:13.034 17:23:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:13.034 17:23:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.034 17:23:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.034 17:23:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.034 17:23:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:13.034 17:23:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.034 17:23:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.034 17:23:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.034 17:23:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.034 17:23:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.034 17:23:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.034 17:23:32 -- paths/export.sh@5 -- # export PATH 00:14:13.034 17:23:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.034 17:23:32 -- nvmf/common.sh@46 -- # : 0 00:14:13.034 17:23:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:13.034 17:23:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:13.034 17:23:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:13.034 17:23:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.034 17:23:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.034 17:23:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:13.034 17:23:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:13.034 17:23:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:13.034 17:23:32 -- target/rpc.sh@11 -- # loops=5 00:14:13.034 17:23:32 -- target/rpc.sh@23 -- # nvmftestinit 00:14:13.034 17:23:32 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:13.034 17:23:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.034 17:23:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:13.034 17:23:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:13.034 17:23:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:13.034 17:23:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.034 17:23:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.034 17:23:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.034 17:23:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:13.034 17:23:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:13.034 17:23:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:13.034 17:23:32 -- common/autotest_common.sh@10 -- # set +x 00:14:19.606 17:23:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:19.606 17:23:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:19.606 17:23:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:19.606 17:23:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:19.606 17:23:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:19.606 17:23:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:19.606 17:23:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:19.606 17:23:39 -- nvmf/common.sh@294 -- # net_devs=() 00:14:19.607 17:23:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:19.607 17:23:39 -- nvmf/common.sh@295 -- # e810=() 00:14:19.607 17:23:39 -- nvmf/common.sh@295 -- # local -ga e810 00:14:19.607 
17:23:39 -- nvmf/common.sh@296 -- # x722=() 00:14:19.607 17:23:39 -- nvmf/common.sh@296 -- # local -ga x722 00:14:19.607 17:23:39 -- nvmf/common.sh@297 -- # mlx=() 00:14:19.607 17:23:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:19.607 17:23:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.607 17:23:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.607 17:23:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.607 17:23:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.607 17:23:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.607 17:23:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.607 17:23:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.607 17:23:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.607 17:23:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.607 17:23:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.607 17:23:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.607 17:23:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:19.607 17:23:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:19.607 17:23:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:19.607 17:23:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:19.607 17:23:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:19.607 17:23:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:19.607 17:23:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:19.607 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:19.607 17:23:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:19.607 17:23:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:19.607 17:23:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:19.607 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:19.607 17:23:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:19.607 17:23:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:19.607 17:23:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:19.607 17:23:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.607 17:23:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:19.607 17:23:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:14:19.607 17:23:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:19.607 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:19.607 17:23:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.607 17:23:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:19.607 17:23:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.607 17:23:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:19.607 17:23:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.607 17:23:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:19.607 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:19.607 17:23:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.607 17:23:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:19.607 17:23:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:19.607 17:23:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:19.607 17:23:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:19.607 17:23:39 -- nvmf/common.sh@57 -- # uname 00:14:19.607 17:23:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:19.607 17:23:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:19.607 17:23:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:19.607 17:23:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:19.607 17:23:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:19.607 17:23:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:19.607 17:23:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:19.607 17:23:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:19.607 17:23:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:19.607 17:23:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:19.607 17:23:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:19.607 17:23:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:19.607 17:23:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:19.607 17:23:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:19.607 17:23:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:19.607 17:23:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:19.607 17:23:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:19.607 17:23:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:19.607 17:23:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:19.607 17:23:39 -- nvmf/common.sh@104 -- # continue 2 00:14:19.607 17:23:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:19.607 17:23:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:19.607 17:23:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:19.607 17:23:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:19.607 17:23:39 -- nvmf/common.sh@104 -- # continue 2 00:14:19.607 17:23:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:19.607 17:23:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 
00:14:19.607 17:23:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:19.607 17:23:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:19.607 17:23:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:19.607 17:23:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:19.607 17:23:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:19.607 17:23:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:19.607 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:19.607 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:19.607 altname enp217s0f0np0 00:14:19.607 altname ens818f0np0 00:14:19.607 inet 192.168.100.8/24 scope global mlx_0_0 00:14:19.607 valid_lft forever preferred_lft forever 00:14:19.607 17:23:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:19.607 17:23:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:19.607 17:23:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:19.607 17:23:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:19.607 17:23:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:19.607 17:23:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:19.607 17:23:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:19.607 17:23:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:19.607 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:19.607 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:19.607 altname enp217s0f1np1 00:14:19.607 altname ens818f1np1 00:14:19.607 inet 192.168.100.9/24 scope global mlx_0_1 00:14:19.607 valid_lft forever preferred_lft forever 00:14:19.607 17:23:39 -- nvmf/common.sh@410 -- # return 0 00:14:19.607 17:23:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:19.607 17:23:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:19.607 17:23:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:19.607 17:23:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:19.607 17:23:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:19.607 17:23:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:19.607 17:23:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:19.607 17:23:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:19.607 17:23:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:19.867 17:23:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:19.867 17:23:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:19.867 17:23:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:19.867 17:23:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:19.867 17:23:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:19.867 17:23:39 -- nvmf/common.sh@104 -- # continue 2 00:14:19.867 17:23:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:19.867 17:23:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:19.867 17:23:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:19.867 17:23:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:19.867 17:23:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:19.867 17:23:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:19.867 17:23:39 -- nvmf/common.sh@104 -- # continue 2 00:14:19.867 17:23:39 -- nvmf/common.sh@85 -- # for nic_name in 
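The repeated ip/awk/cut pipeline above is the harness's get_ip_address helper; written out on its own it is just the following (interface names and addresses as in this run):

get_ip_address() {
  local interface=$1
  # one line per address; field 4 is CIDR, strip the prefix length
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # 192.168.100.8 on this testbed
get_ip_address mlx_0_1   # 192.168.100.9 on this testbed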
$(get_rdma_if_list) 00:14:19.867 17:23:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:19.867 17:23:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:19.867 17:23:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:19.867 17:23:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:19.867 17:23:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:19.868 17:23:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:19.868 17:23:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:19.868 17:23:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:19.868 17:23:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:19.868 17:23:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:19.868 17:23:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:19.868 17:23:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:19.868 192.168.100.9' 00:14:19.868 17:23:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:19.868 192.168.100.9' 00:14:19.868 17:23:39 -- nvmf/common.sh@445 -- # head -n 1 00:14:19.868 17:23:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:19.868 17:23:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:19.868 192.168.100.9' 00:14:19.868 17:23:39 -- nvmf/common.sh@446 -- # tail -n +2 00:14:19.868 17:23:39 -- nvmf/common.sh@446 -- # head -n 1 00:14:19.868 17:23:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:19.868 17:23:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:19.868 17:23:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:19.868 17:23:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:19.868 17:23:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:19.868 17:23:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:19.868 17:23:39 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:19.868 17:23:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:19.868 17:23:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:19.868 17:23:39 -- common/autotest_common.sh@10 -- # set +x 00:14:19.868 17:23:39 -- nvmf/common.sh@469 -- # nvmfpid=2634218 00:14:19.868 17:23:39 -- nvmf/common.sh@470 -- # waitforlisten 2634218 00:14:19.868 17:23:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:19.868 17:23:39 -- common/autotest_common.sh@829 -- # '[' -z 2634218 ']' 00:14:19.868 17:23:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.868 17:23:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.868 17:23:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.868 17:23:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.868 17:23:39 -- common/autotest_common.sh@10 -- # set +x 00:14:19.868 [2024-11-09 17:23:39.501619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
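Here the two discovered addresses are folded into RDMA_IP_LIST, the first and second target IPs are peeled off with head/tail, the RDMA transport options are fixed, the nvme-rdma initiator module is loaded, and nvmf_tgt is launched on four cores (-m 0xF) before the harness waits for its RPC socket. The head/tail split, reproduced as a standalone snippet with the values from this run:

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST"  | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
sudo modprobe nvme-rdma
echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"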
00:14:19.868 [2024-11-09 17:23:39.501662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.868 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.868 [2024-11-09 17:23:39.569912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:20.126 [2024-11-09 17:23:39.644157] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:20.126 [2024-11-09 17:23:39.644260] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.126 [2024-11-09 17:23:39.644271] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.126 [2024-11-09 17:23:39.644280] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.126 [2024-11-09 17:23:39.644325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.126 [2024-11-09 17:23:39.644417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.126 [2024-11-09 17:23:39.644501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.126 [2024-11-09 17:23:39.644504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.693 17:23:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.693 17:23:40 -- common/autotest_common.sh@862 -- # return 0 00:14:20.693 17:23:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:20.693 17:23:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:20.693 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:14:20.693 17:23:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.693 17:23:40 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:20.693 17:23:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.693 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:14:20.693 17:23:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.693 17:23:40 -- target/rpc.sh@26 -- # stats='{ 00:14:20.693 "tick_rate": 2500000000, 00:14:20.693 "poll_groups": [ 00:14:20.693 { 00:14:20.693 "name": "nvmf_tgt_poll_group_0", 00:14:20.693 "admin_qpairs": 0, 00:14:20.693 "io_qpairs": 0, 00:14:20.693 "current_admin_qpairs": 0, 00:14:20.693 "current_io_qpairs": 0, 00:14:20.693 "pending_bdev_io": 0, 00:14:20.693 "completed_nvme_io": 0, 00:14:20.693 "transports": [] 00:14:20.693 }, 00:14:20.693 { 00:14:20.693 "name": "nvmf_tgt_poll_group_1", 00:14:20.693 "admin_qpairs": 0, 00:14:20.693 "io_qpairs": 0, 00:14:20.693 "current_admin_qpairs": 0, 00:14:20.693 "current_io_qpairs": 0, 00:14:20.693 "pending_bdev_io": 0, 00:14:20.693 "completed_nvme_io": 0, 00:14:20.693 "transports": [] 00:14:20.693 }, 00:14:20.693 { 00:14:20.693 "name": "nvmf_tgt_poll_group_2", 00:14:20.693 "admin_qpairs": 0, 00:14:20.693 "io_qpairs": 0, 00:14:20.693 "current_admin_qpairs": 0, 00:14:20.693 "current_io_qpairs": 0, 00:14:20.693 "pending_bdev_io": 0, 00:14:20.693 "completed_nvme_io": 0, 00:14:20.693 "transports": [] 00:14:20.693 }, 00:14:20.693 { 00:14:20.693 "name": "nvmf_tgt_poll_group_3", 00:14:20.693 "admin_qpairs": 0, 00:14:20.693 "io_qpairs": 0, 00:14:20.693 "current_admin_qpairs": 0, 00:14:20.693 "current_io_qpairs": 0, 00:14:20.693 "pending_bdev_io": 0, 00:14:20.693 "completed_nvme_io": 0, 00:14:20.693 "transports": [] 
00:14:20.693 } 00:14:20.693 ] 00:14:20.693 }' 00:14:20.693 17:23:40 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:20.693 17:23:40 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:20.693 17:23:40 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:20.693 17:23:40 -- target/rpc.sh@15 -- # wc -l 00:14:20.693 17:23:40 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:20.693 17:23:40 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:20.952 17:23:40 -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:20.952 17:23:40 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:20.952 17:23:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.952 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:14:20.952 [2024-11-09 17:23:40.505268] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9d70a0/0x9db590) succeed. 00:14:20.952 [2024-11-09 17:23:40.514485] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9d8690/0xa1cc30) succeed. 00:14:20.952 17:23:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.952 17:23:40 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:20.952 17:23:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.952 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:14:20.952 17:23:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.952 17:23:40 -- target/rpc.sh@33 -- # stats='{ 00:14:20.952 "tick_rate": 2500000000, 00:14:20.952 "poll_groups": [ 00:14:20.952 { 00:14:20.952 "name": "nvmf_tgt_poll_group_0", 00:14:20.952 "admin_qpairs": 0, 00:14:20.952 "io_qpairs": 0, 00:14:20.952 "current_admin_qpairs": 0, 00:14:20.952 "current_io_qpairs": 0, 00:14:20.952 "pending_bdev_io": 0, 00:14:20.952 "completed_nvme_io": 0, 00:14:20.952 "transports": [ 00:14:20.952 { 00:14:20.952 "trtype": "RDMA", 00:14:20.952 "pending_data_buffer": 0, 00:14:20.952 "devices": [ 00:14:20.952 { 00:14:20.952 "name": "mlx5_0", 00:14:20.952 "polls": 15940, 00:14:20.952 "idle_polls": 15940, 00:14:20.952 "completions": 0, 00:14:20.952 "requests": 0, 00:14:20.952 "request_latency": 0, 00:14:20.952 "pending_free_request": 0, 00:14:20.952 "pending_rdma_read": 0, 00:14:20.952 "pending_rdma_write": 0, 00:14:20.953 "pending_rdma_send": 0, 00:14:20.953 "total_send_wrs": 0, 00:14:20.953 "send_doorbell_updates": 0, 00:14:20.953 "total_recv_wrs": 4096, 00:14:20.953 "recv_doorbell_updates": 1 00:14:20.953 }, 00:14:20.953 { 00:14:20.953 "name": "mlx5_1", 00:14:20.953 "polls": 15940, 00:14:20.953 "idle_polls": 15940, 00:14:20.953 "completions": 0, 00:14:20.953 "requests": 0, 00:14:20.953 "request_latency": 0, 00:14:20.953 "pending_free_request": 0, 00:14:20.953 "pending_rdma_read": 0, 00:14:20.953 "pending_rdma_write": 0, 00:14:20.953 "pending_rdma_send": 0, 00:14:20.953 "total_send_wrs": 0, 00:14:20.953 "send_doorbell_updates": 0, 00:14:20.953 "total_recv_wrs": 4096, 00:14:20.953 "recv_doorbell_updates": 1 00:14:20.953 } 00:14:20.953 ] 00:14:20.953 } 00:14:20.953 ] 00:14:20.953 }, 00:14:20.953 { 00:14:20.953 "name": "nvmf_tgt_poll_group_1", 00:14:20.953 "admin_qpairs": 0, 00:14:20.953 "io_qpairs": 0, 00:14:20.953 "current_admin_qpairs": 0, 00:14:20.953 "current_io_qpairs": 0, 00:14:20.953 "pending_bdev_io": 0, 00:14:20.953 "completed_nvme_io": 0, 00:14:20.953 "transports": [ 00:14:20.953 { 00:14:20.953 "trtype": "RDMA", 00:14:20.953 "pending_data_buffer": 0, 00:14:20.953 "devices": [ 00:14:20.953 { 00:14:20.953 "name": "mlx5_0", 00:14:20.953 "polls": 10284, 
00:14:20.953 "idle_polls": 10284, 00:14:20.953 "completions": 0, 00:14:20.953 "requests": 0, 00:14:20.953 "request_latency": 0, 00:14:20.953 "pending_free_request": 0, 00:14:20.953 "pending_rdma_read": 0, 00:14:20.953 "pending_rdma_write": 0, 00:14:20.953 "pending_rdma_send": 0, 00:14:20.953 "total_send_wrs": 0, 00:14:20.953 "send_doorbell_updates": 0, 00:14:20.953 "total_recv_wrs": 4096, 00:14:20.953 "recv_doorbell_updates": 1 00:14:20.953 }, 00:14:20.953 { 00:14:20.953 "name": "mlx5_1", 00:14:20.953 "polls": 10284, 00:14:20.953 "idle_polls": 10284, 00:14:20.953 "completions": 0, 00:14:20.953 "requests": 0, 00:14:20.953 "request_latency": 0, 00:14:20.953 "pending_free_request": 0, 00:14:20.953 "pending_rdma_read": 0, 00:14:20.953 "pending_rdma_write": 0, 00:14:20.953 "pending_rdma_send": 0, 00:14:20.953 "total_send_wrs": 0, 00:14:20.953 "send_doorbell_updates": 0, 00:14:20.953 "total_recv_wrs": 4096, 00:14:20.953 "recv_doorbell_updates": 1 00:14:20.953 } 00:14:20.953 ] 00:14:20.953 } 00:14:20.953 ] 00:14:20.953 }, 00:14:20.953 { 00:14:20.953 "name": "nvmf_tgt_poll_group_2", 00:14:20.953 "admin_qpairs": 0, 00:14:20.953 "io_qpairs": 0, 00:14:20.953 "current_admin_qpairs": 0, 00:14:20.953 "current_io_qpairs": 0, 00:14:20.953 "pending_bdev_io": 0, 00:14:20.953 "completed_nvme_io": 0, 00:14:20.953 "transports": [ 00:14:20.953 { 00:14:20.953 "trtype": "RDMA", 00:14:20.953 "pending_data_buffer": 0, 00:14:20.953 "devices": [ 00:14:20.953 { 00:14:20.953 "name": "mlx5_0", 00:14:20.953 "polls": 5770, 00:14:20.953 "idle_polls": 5770, 00:14:20.953 "completions": 0, 00:14:20.953 "requests": 0, 00:14:20.953 "request_latency": 0, 00:14:20.953 "pending_free_request": 0, 00:14:20.953 "pending_rdma_read": 0, 00:14:20.953 "pending_rdma_write": 0, 00:14:20.953 "pending_rdma_send": 0, 00:14:20.953 "total_send_wrs": 0, 00:14:20.953 "send_doorbell_updates": 0, 00:14:20.953 "total_recv_wrs": 4096, 00:14:20.953 "recv_doorbell_updates": 1 00:14:20.953 }, 00:14:20.953 { 00:14:20.953 "name": "mlx5_1", 00:14:20.953 "polls": 5770, 00:14:20.953 "idle_polls": 5770, 00:14:20.953 "completions": 0, 00:14:20.953 "requests": 0, 00:14:20.953 "request_latency": 0, 00:14:20.953 "pending_free_request": 0, 00:14:20.953 "pending_rdma_read": 0, 00:14:20.953 "pending_rdma_write": 0, 00:14:20.953 "pending_rdma_send": 0, 00:14:20.953 "total_send_wrs": 0, 00:14:20.953 "send_doorbell_updates": 0, 00:14:20.953 "total_recv_wrs": 4096, 00:14:20.953 "recv_doorbell_updates": 1 00:14:20.953 } 00:14:20.953 ] 00:14:20.953 } 00:14:20.953 ] 00:14:20.953 }, 00:14:20.953 { 00:14:20.953 "name": "nvmf_tgt_poll_group_3", 00:14:20.953 "admin_qpairs": 0, 00:14:20.953 "io_qpairs": 0, 00:14:20.953 "current_admin_qpairs": 0, 00:14:20.953 "current_io_qpairs": 0, 00:14:20.953 "pending_bdev_io": 0, 00:14:20.953 "completed_nvme_io": 0, 00:14:20.953 "transports": [ 00:14:20.953 { 00:14:20.953 "trtype": "RDMA", 00:14:20.953 "pending_data_buffer": 0, 00:14:20.953 "devices": [ 00:14:20.953 { 00:14:20.953 "name": "mlx5_0", 00:14:20.953 "polls": 913, 00:14:20.953 "idle_polls": 913, 00:14:20.953 "completions": 0, 00:14:20.953 "requests": 0, 00:14:20.953 "request_latency": 0, 00:14:20.953 "pending_free_request": 0, 00:14:20.953 "pending_rdma_read": 0, 00:14:20.953 "pending_rdma_write": 0, 00:14:20.953 "pending_rdma_send": 0, 00:14:20.953 "total_send_wrs": 0, 00:14:20.953 "send_doorbell_updates": 0, 00:14:20.953 "total_recv_wrs": 4096, 00:14:20.953 "recv_doorbell_updates": 1 00:14:20.953 }, 00:14:20.953 { 00:14:20.953 "name": "mlx5_1", 00:14:20.953 "polls": 913, 
00:14:20.953 "idle_polls": 913, 00:14:20.953 "completions": 0, 00:14:20.953 "requests": 0, 00:14:20.953 "request_latency": 0, 00:14:20.953 "pending_free_request": 0, 00:14:20.953 "pending_rdma_read": 0, 00:14:20.953 "pending_rdma_write": 0, 00:14:20.953 "pending_rdma_send": 0, 00:14:20.953 "total_send_wrs": 0, 00:14:20.953 "send_doorbell_updates": 0, 00:14:20.953 "total_recv_wrs": 4096, 00:14:20.953 "recv_doorbell_updates": 1 00:14:20.953 } 00:14:20.953 ] 00:14:20.953 } 00:14:20.953 ] 00:14:20.953 } 00:14:20.953 ] 00:14:20.953 }' 00:14:20.953 17:23:40 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:20.953 17:23:40 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:20.953 17:23:40 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:20.953 17:23:40 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:21.213 17:23:40 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:21.213 17:23:40 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:21.213 17:23:40 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:21.213 17:23:40 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:21.213 17:23:40 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:21.213 17:23:40 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:21.213 17:23:40 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:14:21.213 17:23:40 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:14:21.213 17:23:40 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:14:21.213 17:23:40 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:14:21.213 17:23:40 -- target/rpc.sh@15 -- # wc -l 00:14:21.213 17:23:40 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:14:21.213 17:23:40 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:14:21.213 17:23:40 -- target/rpc.sh@41 -- # transport_type=RDMA 00:14:21.213 17:23:40 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:14:21.213 17:23:40 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:14:21.213 17:23:40 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:14:21.213 17:23:40 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:14:21.213 17:23:40 -- target/rpc.sh@15 -- # wc -l 00:14:21.213 17:23:40 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:14:21.213 17:23:40 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:21.213 17:23:40 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:21.213 17:23:40 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:21.213 17:23:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.213 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:14:21.213 Malloc1 00:14:21.213 17:23:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.213 17:23:40 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:21.213 17:23:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.213 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:14:21.213 17:23:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.213 17:23:40 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:21.213 17:23:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.213 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:14:21.213 17:23:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.213 
17:23:40 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:21.213 17:23:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.213 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:14:21.213 17:23:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.213 17:23:40 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:21.213 17:23:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.213 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:14:21.213 [2024-11-09 17:23:40.957787] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:21.213 17:23:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.213 17:23:40 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:21.213 17:23:40 -- common/autotest_common.sh@650 -- # local es=0 00:14:21.213 17:23:40 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:21.213 17:23:40 -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:21.213 17:23:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.213 17:23:40 -- common/autotest_common.sh@642 -- # type -t nvme 00:14:21.213 17:23:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.213 17:23:40 -- common/autotest_common.sh@644 -- # type -P nvme 00:14:21.213 17:23:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.213 17:23:40 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:21.213 17:23:40 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:21.213 17:23:40 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:21.473 [2024-11-09 17:23:41.003631] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:21.473 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:21.473 could not add new controller: failed to write to nvme-fabrics device 00:14:21.473 17:23:41 -- common/autotest_common.sh@653 -- # es=1 00:14:21.473 17:23:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:21.473 17:23:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:21.473 17:23:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:21.473 17:23:41 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:21.473 17:23:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.473 17:23:41 -- common/autotest_common.sh@10 -- # set +x 00:14:21.473 
17:23:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.473 17:23:41 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:22.410 17:23:42 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:22.410 17:23:42 -- common/autotest_common.sh@1187 -- # local i=0 00:14:22.410 17:23:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:22.410 17:23:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:22.410 17:23:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:24.314 17:23:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:24.314 17:23:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:24.314 17:23:44 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:24.314 17:23:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:24.314 17:23:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:24.314 17:23:44 -- common/autotest_common.sh@1197 -- # return 0 00:14:24.314 17:23:44 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.251 17:23:45 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:25.251 17:23:45 -- common/autotest_common.sh@1208 -- # local i=0 00:14:25.251 17:23:45 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:25.251 17:23:45 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.510 17:23:45 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:25.510 17:23:45 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.510 17:23:45 -- common/autotest_common.sh@1220 -- # return 0 00:14:25.510 17:23:45 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:25.510 17:23:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.510 17:23:45 -- common/autotest_common.sh@10 -- # set +x 00:14:25.510 17:23:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.510 17:23:45 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:25.510 17:23:45 -- common/autotest_common.sh@650 -- # local es=0 00:14:25.510 17:23:45 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:25.510 17:23:45 -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:25.510 17:23:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.510 17:23:45 -- common/autotest_common.sh@642 -- # type -t nvme 00:14:25.510 17:23:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.510 17:23:45 -- common/autotest_common.sh@644 -- # type -P nvme 00:14:25.510 17:23:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.510 17:23:45 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:25.510 
17:23:45 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:25.510 17:23:45 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:25.510 [2024-11-09 17:23:45.095640] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:25.510 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:25.510 could not add new controller: failed to write to nvme-fabrics device 00:14:25.510 17:23:45 -- common/autotest_common.sh@653 -- # es=1 00:14:25.510 17:23:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:25.510 17:23:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:25.510 17:23:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:25.510 17:23:45 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:25.510 17:23:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.510 17:23:45 -- common/autotest_common.sh@10 -- # set +x 00:14:25.510 17:23:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.510 17:23:45 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:26.448 17:23:46 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:26.448 17:23:46 -- common/autotest_common.sh@1187 -- # local i=0 00:14:26.448 17:23:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:26.448 17:23:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:26.448 17:23:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:28.982 17:23:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:28.982 17:23:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:28.982 17:23:48 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:28.982 17:23:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:28.982 17:23:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:28.982 17:23:48 -- common/autotest_common.sh@1197 -- # return 0 00:14:28.982 17:23:48 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.550 17:23:49 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:29.550 17:23:49 -- common/autotest_common.sh@1208 -- # local i=0 00:14:29.550 17:23:49 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:29.550 17:23:49 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.550 17:23:49 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:29.550 17:23:49 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.550 17:23:49 -- common/autotest_common.sh@1220 -- # return 0 00:14:29.550 17:23:49 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.550 17:23:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.550 17:23:49 -- common/autotest_common.sh@10 -- # set +x 00:14:29.550 17:23:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
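This stretch exercises the host-authorization path: with allow_any_host disabled, the connect from host NQN 8013ee90-... is rejected by the target ("does not allow host"), the NOT wrapper turns that expected failure into a pass, and access is then granted either by adding the host NQN explicitly or, as here, by re-enabling allow_any_host. A condensed sketch of the same flow against a running target; the rpc.py path is an assumed location under the workspace's spdk checkout, the NQNs and addresses are as logged, and NOT is simplified relative to autotest_common.sh:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed path
subnqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

NOT() { ! "$@"; }                                    # expect the wrapped command to fail

$rpc nvmf_subsystem_allow_any_host -d "$subnqn"      # only whitelisted hosts may connect
NOT nvme connect -t rdma -n "$subnqn" -q "$hostnqn" -a 192.168.100.8 -s 4420

$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn"    # grant this specific host NQN...
$rpc nvmf_subsystem_allow_any_host -e "$subnqn"      # ...or open the subsystem back up
nvme connect -t rdma -n "$subnqn" -q "$hostnqn" -a 192.168.100.8 -s 4420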
00:14:29.550 17:23:49 -- target/rpc.sh@81 -- # seq 1 5 00:14:29.550 17:23:49 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:29.550 17:23:49 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.550 17:23:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.550 17:23:49 -- common/autotest_common.sh@10 -- # set +x 00:14:29.550 17:23:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.550 17:23:49 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:29.550 17:23:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.550 17:23:49 -- common/autotest_common.sh@10 -- # set +x 00:14:29.550 [2024-11-09 17:23:49.191962] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:29.550 17:23:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.550 17:23:49 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:29.550 17:23:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.550 17:23:49 -- common/autotest_common.sh@10 -- # set +x 00:14:29.550 17:23:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.550 17:23:49 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.550 17:23:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.550 17:23:49 -- common/autotest_common.sh@10 -- # set +x 00:14:29.550 17:23:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.550 17:23:49 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:30.485 17:23:50 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:30.485 17:23:50 -- common/autotest_common.sh@1187 -- # local i=0 00:14:30.485 17:23:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:30.485 17:23:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:30.485 17:23:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:33.019 17:23:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:33.019 17:23:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:33.019 17:23:52 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:33.019 17:23:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:33.019 17:23:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:33.019 17:23:52 -- common/autotest_common.sh@1197 -- # return 0 00:14:33.019 17:23:52 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:33.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.587 17:23:53 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:33.587 17:23:53 -- common/autotest_common.sh@1208 -- # local i=0 00:14:33.587 17:23:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:33.587 17:23:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.587 17:23:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:33.587 17:23:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.588 17:23:53 -- common/autotest_common.sh@1220 -- # return 0 00:14:33.588 
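Each connect/disconnect pair above is bracketed by waitforserial and waitforserial_disconnect, which simply poll lsblk until a block device carrying the subsystem serial (SPDKISFASTANDAWESOME) appears or disappears. A simplified reconstruction of that polling; the real helpers in autotest_common.sh carry extra retry accounting, and the disconnect loop bound here is an assumption:

waitforserial() {
  local serial=$1 i=0
  while (( i++ <= 15 )); do
    sleep 2
    # count namespaces whose SERIAL column matches
    (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
  done
  return 1
}

waitforserial_disconnect() {
  local serial=$1 i=0
  while (( i++ <= 20 )); do                           # loop bound assumed
    lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
    sleep 2
  done
  return 1
}

waitforserial SPDKISFASTANDAWESOME
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
waitforserial_disconnect SPDKISFASTANDAWESOME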
17:23:53 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:33.588 17:23:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.588 17:23:53 -- common/autotest_common.sh@10 -- # set +x 00:14:33.588 17:23:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.588 17:23:53 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:33.588 17:23:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.588 17:23:53 -- common/autotest_common.sh@10 -- # set +x 00:14:33.588 17:23:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.588 17:23:53 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:33.588 17:23:53 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:33.588 17:23:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.588 17:23:53 -- common/autotest_common.sh@10 -- # set +x 00:14:33.588 17:23:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.588 17:23:53 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:33.588 17:23:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.588 17:23:53 -- common/autotest_common.sh@10 -- # set +x 00:14:33.588 [2024-11-09 17:23:53.242125] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:33.588 17:23:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.588 17:23:53 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:33.588 17:23:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.588 17:23:53 -- common/autotest_common.sh@10 -- # set +x 00:14:33.588 17:23:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.588 17:23:53 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:33.588 17:23:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.588 17:23:53 -- common/autotest_common.sh@10 -- # set +x 00:14:33.588 17:23:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.588 17:23:53 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:34.630 17:23:54 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:34.631 17:23:54 -- common/autotest_common.sh@1187 -- # local i=0 00:14:34.631 17:23:54 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:34.631 17:23:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:34.631 17:23:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:36.535 17:23:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:36.535 17:23:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:36.535 17:23:56 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:36.535 17:23:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:36.535 17:23:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:36.535 17:23:56 -- common/autotest_common.sh@1197 -- # return 0 00:14:36.535 17:23:56 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:37.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.472 17:23:57 -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:37.472 17:23:57 -- common/autotest_common.sh@1208 -- # local i=0 00:14:37.472 17:23:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:37.472 17:23:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:37.731 17:23:57 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:37.731 17:23:57 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:37.731 17:23:57 -- common/autotest_common.sh@1220 -- # return 0 00:14:37.731 17:23:57 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:37.731 17:23:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.731 17:23:57 -- common/autotest_common.sh@10 -- # set +x 00:14:37.731 17:23:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.731 17:23:57 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:37.731 17:23:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.731 17:23:57 -- common/autotest_common.sh@10 -- # set +x 00:14:37.731 17:23:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.731 17:23:57 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:37.731 17:23:57 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:37.731 17:23:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.731 17:23:57 -- common/autotest_common.sh@10 -- # set +x 00:14:37.731 17:23:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.731 17:23:57 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:37.731 17:23:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.731 17:23:57 -- common/autotest_common.sh@10 -- # set +x 00:14:37.731 [2024-11-09 17:23:57.292763] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:37.731 17:23:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.731 17:23:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:37.731 17:23:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.731 17:23:57 -- common/autotest_common.sh@10 -- # set +x 00:14:37.731 17:23:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.731 17:23:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:37.731 17:23:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.731 17:23:57 -- common/autotest_common.sh@10 -- # set +x 00:14:37.731 17:23:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.731 17:23:57 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:38.668 17:23:58 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:38.668 17:23:58 -- common/autotest_common.sh@1187 -- # local i=0 00:14:38.668 17:23:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:38.668 17:23:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:38.668 17:23:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:40.575 17:24:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:40.575 17:24:00 -- 
common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:40.575 17:24:00 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:40.575 17:24:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:40.575 17:24:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:40.575 17:24:00 -- common/autotest_common.sh@1197 -- # return 0 00:14:40.575 17:24:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:41.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.511 17:24:01 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:41.511 17:24:01 -- common/autotest_common.sh@1208 -- # local i=0 00:14:41.511 17:24:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:41.511 17:24:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:41.770 17:24:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:41.770 17:24:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:41.770 17:24:01 -- common/autotest_common.sh@1220 -- # return 0 00:14:41.770 17:24:01 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:41.770 17:24:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.770 17:24:01 -- common/autotest_common.sh@10 -- # set +x 00:14:41.770 17:24:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.770 17:24:01 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.770 17:24:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.770 17:24:01 -- common/autotest_common.sh@10 -- # set +x 00:14:41.770 17:24:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.770 17:24:01 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:41.770 17:24:01 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:41.770 17:24:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.770 17:24:01 -- common/autotest_common.sh@10 -- # set +x 00:14:41.770 17:24:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.770 17:24:01 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:41.770 17:24:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.770 17:24:01 -- common/autotest_common.sh@10 -- # set +x 00:14:41.770 [2024-11-09 17:24:01.347205] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:41.770 17:24:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.770 17:24:01 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:41.770 17:24:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.770 17:24:01 -- common/autotest_common.sh@10 -- # set +x 00:14:41.770 17:24:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.770 17:24:01 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:41.770 17:24:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.770 17:24:01 -- common/autotest_common.sh@10 -- # set +x 00:14:41.770 17:24:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.770 17:24:01 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:42.707 17:24:02 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:42.707 17:24:02 -- common/autotest_common.sh@1187 -- # local i=0 00:14:42.707 17:24:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.707 17:24:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:42.707 17:24:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:44.613 17:24:04 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:44.613 17:24:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:44.613 17:24:04 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:44.613 17:24:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:44.613 17:24:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:44.613 17:24:04 -- common/autotest_common.sh@1197 -- # return 0 00:14:44.613 17:24:04 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.550 17:24:05 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:45.550 17:24:05 -- common/autotest_common.sh@1208 -- # local i=0 00:14:45.550 17:24:05 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:45.550 17:24:05 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.810 17:24:05 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:45.810 17:24:05 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.810 17:24:05 -- common/autotest_common.sh@1220 -- # return 0 00:14:45.810 17:24:05 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:45.810 17:24:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.810 17:24:05 -- common/autotest_common.sh@10 -- # set +x 00:14:45.810 17:24:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.810 17:24:05 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.810 17:24:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.810 17:24:05 -- common/autotest_common.sh@10 -- # set +x 00:14:45.810 17:24:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.810 17:24:05 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:45.810 17:24:05 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:45.810 17:24:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.810 17:24:05 -- common/autotest_common.sh@10 -- # set +x 00:14:45.810 17:24:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.810 17:24:05 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:45.810 17:24:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.810 17:24:05 -- common/autotest_common.sh@10 -- # set +x 00:14:45.810 [2024-11-09 17:24:05.373682] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:45.810 17:24:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.810 17:24:05 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:45.810 17:24:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.810 17:24:05 -- 
common/autotest_common.sh@10 -- # set +x 00:14:45.810 17:24:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.810 17:24:05 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:45.810 17:24:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.810 17:24:05 -- common/autotest_common.sh@10 -- # set +x 00:14:45.810 17:24:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.810 17:24:05 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:46.747 17:24:06 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:46.747 17:24:06 -- common/autotest_common.sh@1187 -- # local i=0 00:14:46.747 17:24:06 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.747 17:24:06 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:46.747 17:24:06 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:48.653 17:24:08 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:48.653 17:24:08 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:48.653 17:24:08 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.653 17:24:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:48.653 17:24:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.653 17:24:08 -- common/autotest_common.sh@1197 -- # return 0 00:14:48.653 17:24:08 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:49.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.591 17:24:09 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:49.591 17:24:09 -- common/autotest_common.sh@1208 -- # local i=0 00:14:49.591 17:24:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:49.591 17:24:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.591 17:24:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:49.591 17:24:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.851 17:24:09 -- common/autotest_common.sh@1220 -- # return 0 00:14:49.851 17:24:09 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@99 -- # seq 1 5 00:14:49.851 17:24:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:49.851 17:24:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 [2024-11-09 17:24:09.420120] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:49.851 17:24:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 [2024-11-09 17:24:09.472287] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 
17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:49.851 17:24:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 [2024-11-09 17:24:09.520450] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:49.851 17:24:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 [2024-11-09 17:24:09.568668] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.851 17:24:09 -- target/rpc.sh@102 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:49.851 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.851 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.852 17:24:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:49.852 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.852 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.852 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.852 17:24:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.852 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.852 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.852 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.852 17:24:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:49.852 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.852 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.852 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.852 17:24:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:49.852 17:24:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:49.852 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.852 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:49.852 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.852 17:24:09 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:49.852 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.852 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:50.110 [2024-11-09 17:24:09.620797] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:50.110 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.110 17:24:09 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:50.110 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.110 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:50.110 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.110 17:24:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:50.110 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.110 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:50.110 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.110 17:24:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.110 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.110 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:50.110 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.110 17:24:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.110 17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.110 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:50.110 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.110 17:24:09 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:50.110 
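Each pass of the loop traced above exercises one full subsystem lifecycle. A minimal sketch of a single pass, assuming rpc_cmd simply forwards to scripts/rpc.py against the default /var/tmp/spdk.sock socket:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME                 # create the subsystem with a serial number
    $RPC nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420  # listen on NVMe/RDMA port 4420
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc1                                 # attach the Malloc1 bdev as a namespace
    $RPC nvmf_subsystem_allow_any_host "$NQN"                                 # let any host NQN connect
    $RPC nvmf_subsystem_remove_ns "$NQN" 1                                    # detach namespace ID 1 again
    $RPC nvmf_delete_subsystem "$NQN"                                         # tear the subsystem back down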
17:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.110 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:14:50.110 17:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.110 17:24:09 -- target/rpc.sh@110 -- # stats='{ 00:14:50.110 "tick_rate": 2500000000, 00:14:50.110 "poll_groups": [ 00:14:50.110 { 00:14:50.110 "name": "nvmf_tgt_poll_group_0", 00:14:50.110 "admin_qpairs": 2, 00:14:50.110 "io_qpairs": 27, 00:14:50.110 "current_admin_qpairs": 0, 00:14:50.110 "current_io_qpairs": 0, 00:14:50.110 "pending_bdev_io": 0, 00:14:50.110 "completed_nvme_io": 126, 00:14:50.110 "transports": [ 00:14:50.110 { 00:14:50.110 "trtype": "RDMA", 00:14:50.110 "pending_data_buffer": 0, 00:14:50.110 "devices": [ 00:14:50.110 { 00:14:50.110 "name": "mlx5_0", 00:14:50.110 "polls": 3562903, 00:14:50.110 "idle_polls": 3562581, 00:14:50.110 "completions": 363, 00:14:50.110 "requests": 181, 00:14:50.110 "request_latency": 34297240, 00:14:50.110 "pending_free_request": 0, 00:14:50.110 "pending_rdma_read": 0, 00:14:50.110 "pending_rdma_write": 0, 00:14:50.111 "pending_rdma_send": 0, 00:14:50.111 "total_send_wrs": 306, 00:14:50.111 "send_doorbell_updates": 160, 00:14:50.111 "total_recv_wrs": 4277, 00:14:50.111 "recv_doorbell_updates": 160 00:14:50.111 }, 00:14:50.111 { 00:14:50.111 "name": "mlx5_1", 00:14:50.111 "polls": 3562903, 00:14:50.111 "idle_polls": 3562903, 00:14:50.111 "completions": 0, 00:14:50.111 "requests": 0, 00:14:50.111 "request_latency": 0, 00:14:50.111 "pending_free_request": 0, 00:14:50.111 "pending_rdma_read": 0, 00:14:50.111 "pending_rdma_write": 0, 00:14:50.111 "pending_rdma_send": 0, 00:14:50.111 "total_send_wrs": 0, 00:14:50.111 "send_doorbell_updates": 0, 00:14:50.111 "total_recv_wrs": 4096, 00:14:50.111 "recv_doorbell_updates": 1 00:14:50.111 } 00:14:50.111 ] 00:14:50.111 } 00:14:50.111 ] 00:14:50.111 }, 00:14:50.111 { 00:14:50.111 "name": "nvmf_tgt_poll_group_1", 00:14:50.111 "admin_qpairs": 2, 00:14:50.111 "io_qpairs": 26, 00:14:50.111 "current_admin_qpairs": 0, 00:14:50.111 "current_io_qpairs": 0, 00:14:50.111 "pending_bdev_io": 0, 00:14:50.111 "completed_nvme_io": 78, 00:14:50.111 "transports": [ 00:14:50.111 { 00:14:50.111 "trtype": "RDMA", 00:14:50.111 "pending_data_buffer": 0, 00:14:50.111 "devices": [ 00:14:50.111 { 00:14:50.111 "name": "mlx5_0", 00:14:50.111 "polls": 3513102, 00:14:50.111 "idle_polls": 3512860, 00:14:50.111 "completions": 262, 00:14:50.111 "requests": 131, 00:14:50.111 "request_latency": 21736470, 00:14:50.111 "pending_free_request": 0, 00:14:50.111 "pending_rdma_read": 0, 00:14:50.111 "pending_rdma_write": 0, 00:14:50.111 "pending_rdma_send": 0, 00:14:50.111 "total_send_wrs": 208, 00:14:50.111 "send_doorbell_updates": 119, 00:14:50.111 "total_recv_wrs": 4227, 00:14:50.111 "recv_doorbell_updates": 120 00:14:50.111 }, 00:14:50.111 { 00:14:50.111 "name": "mlx5_1", 00:14:50.111 "polls": 3513102, 00:14:50.111 "idle_polls": 3513102, 00:14:50.111 "completions": 0, 00:14:50.111 "requests": 0, 00:14:50.111 "request_latency": 0, 00:14:50.111 "pending_free_request": 0, 00:14:50.111 "pending_rdma_read": 0, 00:14:50.111 "pending_rdma_write": 0, 00:14:50.111 "pending_rdma_send": 0, 00:14:50.111 "total_send_wrs": 0, 00:14:50.111 "send_doorbell_updates": 0, 00:14:50.111 "total_recv_wrs": 4096, 00:14:50.111 "recv_doorbell_updates": 1 00:14:50.111 } 00:14:50.111 ] 00:14:50.111 } 00:14:50.111 ] 00:14:50.111 }, 00:14:50.111 { 00:14:50.111 "name": "nvmf_tgt_poll_group_2", 00:14:50.111 "admin_qpairs": 1, 00:14:50.111 "io_qpairs": 26, 00:14:50.111 
"current_admin_qpairs": 0, 00:14:50.111 "current_io_qpairs": 0, 00:14:50.111 "pending_bdev_io": 0, 00:14:50.111 "completed_nvme_io": 126, 00:14:50.111 "transports": [ 00:14:50.111 { 00:14:50.111 "trtype": "RDMA", 00:14:50.111 "pending_data_buffer": 0, 00:14:50.111 "devices": [ 00:14:50.111 { 00:14:50.111 "name": "mlx5_0", 00:14:50.111 "polls": 3567378, 00:14:50.111 "idle_polls": 3567109, 00:14:50.111 "completions": 307, 00:14:50.111 "requests": 153, 00:14:50.111 "request_latency": 32830920, 00:14:50.111 "pending_free_request": 0, 00:14:50.111 "pending_rdma_read": 0, 00:14:50.111 "pending_rdma_write": 0, 00:14:50.111 "pending_rdma_send": 0, 00:14:50.111 "total_send_wrs": 266, 00:14:50.111 "send_doorbell_updates": 130, 00:14:50.111 "total_recv_wrs": 4249, 00:14:50.111 "recv_doorbell_updates": 130 00:14:50.111 }, 00:14:50.111 { 00:14:50.111 "name": "mlx5_1", 00:14:50.111 "polls": 3567378, 00:14:50.111 "idle_polls": 3567378, 00:14:50.111 "completions": 0, 00:14:50.111 "requests": 0, 00:14:50.111 "request_latency": 0, 00:14:50.111 "pending_free_request": 0, 00:14:50.111 "pending_rdma_read": 0, 00:14:50.111 "pending_rdma_write": 0, 00:14:50.111 "pending_rdma_send": 0, 00:14:50.111 "total_send_wrs": 0, 00:14:50.111 "send_doorbell_updates": 0, 00:14:50.111 "total_recv_wrs": 4096, 00:14:50.111 "recv_doorbell_updates": 1 00:14:50.111 } 00:14:50.111 ] 00:14:50.111 } 00:14:50.111 ] 00:14:50.111 }, 00:14:50.111 { 00:14:50.111 "name": "nvmf_tgt_poll_group_3", 00:14:50.111 "admin_qpairs": 2, 00:14:50.111 "io_qpairs": 26, 00:14:50.111 "current_admin_qpairs": 0, 00:14:50.111 "current_io_qpairs": 0, 00:14:50.111 "pending_bdev_io": 0, 00:14:50.111 "completed_nvme_io": 125, 00:14:50.111 "transports": [ 00:14:50.111 { 00:14:50.111 "trtype": "RDMA", 00:14:50.111 "pending_data_buffer": 0, 00:14:50.111 "devices": [ 00:14:50.111 { 00:14:50.111 "name": "mlx5_0", 00:14:50.111 "polls": 2749639, 00:14:50.111 "idle_polls": 2749325, 00:14:50.111 "completions": 356, 00:14:50.111 "requests": 178, 00:14:50.111 "request_latency": 39503654, 00:14:50.111 "pending_free_request": 0, 00:14:50.111 "pending_rdma_read": 0, 00:14:50.111 "pending_rdma_write": 0, 00:14:50.111 "pending_rdma_send": 0, 00:14:50.111 "total_send_wrs": 302, 00:14:50.111 "send_doorbell_updates": 153, 00:14:50.111 "total_recv_wrs": 4274, 00:14:50.111 "recv_doorbell_updates": 154 00:14:50.111 }, 00:14:50.111 { 00:14:50.111 "name": "mlx5_1", 00:14:50.111 "polls": 2749639, 00:14:50.111 "idle_polls": 2749639, 00:14:50.111 "completions": 0, 00:14:50.111 "requests": 0, 00:14:50.111 "request_latency": 0, 00:14:50.111 "pending_free_request": 0, 00:14:50.111 "pending_rdma_read": 0, 00:14:50.111 "pending_rdma_write": 0, 00:14:50.111 "pending_rdma_send": 0, 00:14:50.111 "total_send_wrs": 0, 00:14:50.111 "send_doorbell_updates": 0, 00:14:50.111 "total_recv_wrs": 4096, 00:14:50.111 "recv_doorbell_updates": 1 00:14:50.111 } 00:14:50.111 ] 00:14:50.111 } 00:14:50.111 ] 00:14:50.111 } 00:14:50.111 ] 00:14:50.111 }' 00:14:50.111 17:24:09 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:50.111 17:24:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:50.111 17:24:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:50.111 17:24:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:50.111 17:24:09 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:50.111 17:24:09 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:50.111 17:24:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:50.111 
17:24:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:50.111 17:24:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:50.111 17:24:09 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:14:50.111 17:24:09 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:14:50.111 17:24:09 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:14:50.111 17:24:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:14:50.111 17:24:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:14:50.111 17:24:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:50.111 17:24:09 -- target/rpc.sh@117 -- # (( 1288 > 0 )) 00:14:50.111 17:24:09 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:14:50.111 17:24:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:14:50.111 17:24:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:14:50.111 17:24:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:50.111 17:24:09 -- target/rpc.sh@118 -- # (( 128368284 > 0 )) 00:14:50.111 17:24:09 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:50.111 17:24:09 -- target/rpc.sh@123 -- # nvmftestfini 00:14:50.111 17:24:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:50.111 17:24:09 -- nvmf/common.sh@116 -- # sync 00:14:50.111 17:24:09 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:50.111 17:24:09 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:50.373 17:24:09 -- nvmf/common.sh@119 -- # set +e 00:14:50.373 17:24:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:50.373 17:24:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:50.373 rmmod nvme_rdma 00:14:50.373 rmmod nvme_fabrics 00:14:50.373 17:24:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:50.373 17:24:09 -- nvmf/common.sh@123 -- # set -e 00:14:50.373 17:24:09 -- nvmf/common.sh@124 -- # return 0 00:14:50.374 17:24:09 -- nvmf/common.sh@477 -- # '[' -n 2634218 ']' 00:14:50.374 17:24:09 -- nvmf/common.sh@478 -- # killprocess 2634218 00:14:50.374 17:24:09 -- common/autotest_common.sh@936 -- # '[' -z 2634218 ']' 00:14:50.374 17:24:09 -- common/autotest_common.sh@940 -- # kill -0 2634218 00:14:50.374 17:24:09 -- common/autotest_common.sh@941 -- # uname 00:14:50.374 17:24:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:50.374 17:24:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2634218 00:14:50.374 17:24:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:50.374 17:24:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:50.374 17:24:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2634218' 00:14:50.374 killing process with pid 2634218 00:14:50.374 17:24:09 -- common/autotest_common.sh@955 -- # kill 2634218 00:14:50.374 17:24:09 -- common/autotest_common.sh@960 -- # wait 2634218 00:14:50.633 17:24:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:50.633 17:24:10 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:50.633 00:14:50.633 real 0m37.712s 00:14:50.633 user 2m4.326s 00:14:50.633 sys 0m6.851s 00:14:50.633 17:24:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:50.633 17:24:10 -- common/autotest_common.sh@10 -- # set +x 00:14:50.633 ************************************ 00:14:50.633 END TEST nvmf_rpc 00:14:50.633 ************************************ 00:14:50.633 17:24:10 -- 
nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:14:50.633 17:24:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:50.633 17:24:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:50.633 17:24:10 -- common/autotest_common.sh@10 -- # set +x 00:14:50.634 ************************************ 00:14:50.634 START TEST nvmf_invalid 00:14:50.634 ************************************ 00:14:50.634 17:24:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:14:50.893 * Looking for test storage... 00:14:50.893 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:50.893 17:24:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:50.893 17:24:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:50.893 17:24:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:50.893 17:24:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:50.893 17:24:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:50.893 17:24:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:50.893 17:24:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:50.893 17:24:10 -- scripts/common.sh@335 -- # IFS=.-: 00:14:50.893 17:24:10 -- scripts/common.sh@335 -- # read -ra ver1 00:14:50.893 17:24:10 -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.893 17:24:10 -- scripts/common.sh@336 -- # read -ra ver2 00:14:50.893 17:24:10 -- scripts/common.sh@337 -- # local 'op=<' 00:14:50.893 17:24:10 -- scripts/common.sh@339 -- # ver1_l=2 00:14:50.893 17:24:10 -- scripts/common.sh@340 -- # ver2_l=1 00:14:50.893 17:24:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:50.893 17:24:10 -- scripts/common.sh@343 -- # case "$op" in 00:14:50.893 17:24:10 -- scripts/common.sh@344 -- # : 1 00:14:50.893 17:24:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:50.893 17:24:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:50.893 17:24:10 -- scripts/common.sh@364 -- # decimal 1 00:14:50.893 17:24:10 -- scripts/common.sh@352 -- # local d=1 00:14:50.893 17:24:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.893 17:24:10 -- scripts/common.sh@354 -- # echo 1 00:14:50.893 17:24:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:50.893 17:24:10 -- scripts/common.sh@365 -- # decimal 2 00:14:50.893 17:24:10 -- scripts/common.sh@352 -- # local d=2 00:14:50.893 17:24:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.893 17:24:10 -- scripts/common.sh@354 -- # echo 2 00:14:50.893 17:24:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:50.893 17:24:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:50.893 17:24:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:50.893 17:24:10 -- scripts/common.sh@367 -- # return 0 00:14:50.893 17:24:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.893 17:24:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:50.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.893 --rc genhtml_branch_coverage=1 00:14:50.893 --rc genhtml_function_coverage=1 00:14:50.893 --rc genhtml_legend=1 00:14:50.893 --rc geninfo_all_blocks=1 00:14:50.893 --rc geninfo_unexecuted_blocks=1 00:14:50.893 00:14:50.893 ' 00:14:50.893 17:24:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:50.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.893 --rc genhtml_branch_coverage=1 00:14:50.893 --rc genhtml_function_coverage=1 00:14:50.893 --rc genhtml_legend=1 00:14:50.893 --rc geninfo_all_blocks=1 00:14:50.893 --rc geninfo_unexecuted_blocks=1 00:14:50.893 00:14:50.893 ' 00:14:50.893 17:24:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:50.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.893 --rc genhtml_branch_coverage=1 00:14:50.893 --rc genhtml_function_coverage=1 00:14:50.893 --rc genhtml_legend=1 00:14:50.893 --rc geninfo_all_blocks=1 00:14:50.893 --rc geninfo_unexecuted_blocks=1 00:14:50.893 00:14:50.893 ' 00:14:50.893 17:24:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:50.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.893 --rc genhtml_branch_coverage=1 00:14:50.893 --rc genhtml_function_coverage=1 00:14:50.893 --rc genhtml_legend=1 00:14:50.893 --rc geninfo_all_blocks=1 00:14:50.893 --rc geninfo_unexecuted_blocks=1 00:14:50.893 00:14:50.893 ' 00:14:50.893 17:24:10 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.893 17:24:10 -- nvmf/common.sh@7 -- # uname -s 00:14:50.893 17:24:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.893 17:24:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.893 17:24:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.893 17:24:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.893 17:24:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.893 17:24:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.893 17:24:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.893 17:24:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.893 17:24:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.893 17:24:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.893 17:24:10 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:50.893 17:24:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:50.893 17:24:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.893 17:24:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.893 17:24:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.893 17:24:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:50.893 17:24:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.893 17:24:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.893 17:24:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.893 17:24:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.893 17:24:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.893 17:24:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.893 17:24:10 -- paths/export.sh@5 -- # export PATH 00:14:50.893 17:24:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.893 17:24:10 -- nvmf/common.sh@46 -- # : 0 00:14:50.893 17:24:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:50.893 17:24:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:50.893 17:24:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:50.893 17:24:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.893 17:24:10 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.893 17:24:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:50.893 17:24:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:50.893 17:24:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:50.893 17:24:10 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:50.893 17:24:10 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:50.893 17:24:10 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:50.893 17:24:10 -- target/invalid.sh@14 -- # target=foobar 00:14:50.893 17:24:10 -- target/invalid.sh@16 -- # RANDOM=0 00:14:50.893 17:24:10 -- target/invalid.sh@34 -- # nvmftestinit 00:14:50.893 17:24:10 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:50.893 17:24:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.893 17:24:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:50.893 17:24:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:50.893 17:24:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:50.893 17:24:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.893 17:24:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.893 17:24:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.893 17:24:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:50.893 17:24:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:50.893 17:24:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:50.893 17:24:10 -- common/autotest_common.sh@10 -- # set +x 00:14:57.465 17:24:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:57.465 17:24:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:57.465 17:24:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:57.465 17:24:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:57.465 17:24:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:57.465 17:24:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:57.465 17:24:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:57.465 17:24:16 -- nvmf/common.sh@294 -- # net_devs=() 00:14:57.465 17:24:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:57.465 17:24:16 -- nvmf/common.sh@295 -- # e810=() 00:14:57.465 17:24:16 -- nvmf/common.sh@295 -- # local -ga e810 00:14:57.465 17:24:16 -- nvmf/common.sh@296 -- # x722=() 00:14:57.465 17:24:16 -- nvmf/common.sh@296 -- # local -ga x722 00:14:57.465 17:24:16 -- nvmf/common.sh@297 -- # mlx=() 00:14:57.465 17:24:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:57.465 17:24:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.465 17:24:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.465 17:24:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:57.465 17:24:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.465 17:24:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.465 17:24:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.465 17:24:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.465 17:24:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:57.465 17:24:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.465 17:24:16 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.465 17:24:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.465 17:24:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:57.465 17:24:16 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:57.465 17:24:16 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:57.465 17:24:16 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:57.465 17:24:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:57.465 17:24:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:57.465 17:24:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:57.465 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:57.465 17:24:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:57.465 17:24:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:57.465 17:24:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:57.465 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:57.465 17:24:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:57.465 17:24:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:57.465 17:24:16 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:57.465 17:24:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.465 17:24:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:57.465 17:24:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.465 17:24:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:57.465 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:57.465 17:24:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.465 17:24:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:57.465 17:24:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.465 17:24:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:57.465 17:24:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.465 17:24:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:57.465 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:57.465 17:24:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.465 17:24:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:57.465 17:24:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:57.465 17:24:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:57.465 17:24:16 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:57.465 17:24:16 -- nvmf/common.sh@57 -- # uname 00:14:57.465 17:24:16 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:57.465 17:24:16 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:57.465 17:24:16 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:57.465 17:24:16 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:57.465 17:24:16 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:57.465 17:24:16 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:57.465 17:24:16 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:57.465 17:24:16 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:57.465 17:24:16 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:57.465 17:24:16 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:57.465 17:24:16 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:57.465 17:24:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:57.465 17:24:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:57.465 17:24:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:57.465 17:24:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:57.465 17:24:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:57.465 17:24:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:57.465 17:24:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.465 17:24:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:57.465 17:24:16 -- nvmf/common.sh@104 -- # continue 2 00:14:57.465 17:24:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:57.465 17:24:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.465 17:24:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.465 17:24:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:57.465 17:24:16 -- nvmf/common.sh@104 -- # continue 2 00:14:57.465 17:24:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:57.465 17:24:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:14:57.465 17:24:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:57.465 17:24:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:57.465 17:24:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:57.465 17:24:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:57.465 17:24:16 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:57.465 17:24:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:57.465 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:57.465 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:57.465 altname enp217s0f0np0 00:14:57.465 altname ens818f0np0 00:14:57.465 inet 192.168.100.8/24 scope global mlx_0_0 00:14:57.465 valid_lft forever preferred_lft forever 00:14:57.465 17:24:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:57.465 17:24:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:57.465 17:24:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:57.465 17:24:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:57.465 17:24:16 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:14:57.465 17:24:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:57.465 17:24:16 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:57.465 17:24:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:57.465 17:24:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:57.465 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:57.465 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:57.465 altname enp217s0f1np1 00:14:57.465 altname ens818f1np1 00:14:57.465 inet 192.168.100.9/24 scope global mlx_0_1 00:14:57.465 valid_lft forever preferred_lft forever 00:14:57.465 17:24:16 -- nvmf/common.sh@410 -- # return 0 00:14:57.465 17:24:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:57.466 17:24:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:57.466 17:24:16 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:57.466 17:24:16 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:57.466 17:24:16 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:57.466 17:24:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:57.466 17:24:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:57.466 17:24:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:57.466 17:24:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:57.466 17:24:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:57.466 17:24:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:57.466 17:24:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.466 17:24:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:57.466 17:24:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:57.466 17:24:16 -- nvmf/common.sh@104 -- # continue 2 00:14:57.466 17:24:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:57.466 17:24:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.466 17:24:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:57.466 17:24:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.466 17:24:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:57.466 17:24:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:57.466 17:24:16 -- nvmf/common.sh@104 -- # continue 2 00:14:57.466 17:24:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:57.466 17:24:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:57.466 17:24:17 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:57.466 17:24:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:57.466 17:24:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:57.466 17:24:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:57.466 17:24:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:57.466 17:24:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:57.466 17:24:17 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:57.466 17:24:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:57.466 17:24:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:57.466 17:24:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:57.466 17:24:17 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:57.466 192.168.100.9' 00:14:57.466 17:24:17 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:57.466 192.168.100.9' 00:14:57.466 17:24:17 -- nvmf/common.sh@445 -- # head -n 1 00:14:57.466 17:24:17 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:57.466 17:24:17 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:57.466 192.168.100.9' 00:14:57.466 17:24:17 -- nvmf/common.sh@446 -- # tail -n +2 00:14:57.466 17:24:17 -- nvmf/common.sh@446 -- # head -n 1 00:14:57.466 17:24:17 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:57.466 17:24:17 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:57.466 17:24:17 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:57.466 17:24:17 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:57.466 17:24:17 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:57.466 17:24:17 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:57.466 17:24:17 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:57.466 17:24:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:57.466 17:24:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:57.466 17:24:17 -- common/autotest_common.sh@10 -- # set +x 00:14:57.466 17:24:17 -- nvmf/common.sh@469 -- # nvmfpid=2643418 00:14:57.466 17:24:17 -- nvmf/common.sh@470 -- # waitforlisten 2643418 00:14:57.466 17:24:17 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:57.466 17:24:17 -- common/autotest_common.sh@829 -- # '[' -z 2643418 ']' 00:14:57.466 17:24:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.466 17:24:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.466 17:24:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.466 17:24:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.466 17:24:17 -- common/autotest_common.sh@10 -- # set +x 00:14:57.466 [2024-11-09 17:24:17.122434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:57.466 [2024-11-09 17:24:17.122492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.466 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.466 [2024-11-09 17:24:17.191964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.725 [2024-11-09 17:24:17.267854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:57.725 [2024-11-09 17:24:17.267959] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.725 [2024-11-09 17:24:17.267969] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.725 [2024-11-09 17:24:17.267977] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
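A condensed sketch of the nvmftestinit/nvmfappstart sequence traced above; the waitforlisten step is approximated here by polling the RPC socket with rpc_get_methods, which is an illustration rather than what the harness itself does:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # IPv4 address of each RDMA-capable netdev (mlx_0_0 and mlx_0_1 above)
    RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

    modprobe nvme-rdma                       # host-side NVMe/RDMA transport module

    # nvmfappstart -m 0xF: run the target on 4 cores with tracepoint group mask 0xFFFF
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                            # wait for the target to open its RPC socket
    done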
00:14:57.725 [2024-11-09 17:24:17.268030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.726 [2024-11-09 17:24:17.268124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.726 [2024-11-09 17:24:17.268186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.726 [2024-11-09 17:24:17.268187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.294 17:24:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.294 17:24:17 -- common/autotest_common.sh@862 -- # return 0 00:14:58.294 17:24:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:58.294 17:24:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:58.294 17:24:17 -- common/autotest_common.sh@10 -- # set +x 00:14:58.294 17:24:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.294 17:24:17 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:58.294 17:24:17 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18688 00:14:58.552 [2024-11-09 17:24:18.154211] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:58.552 17:24:18 -- target/invalid.sh@40 -- # out='request: 00:14:58.552 { 00:14:58.552 "nqn": "nqn.2016-06.io.spdk:cnode18688", 00:14:58.552 "tgt_name": "foobar", 00:14:58.552 "method": "nvmf_create_subsystem", 00:14:58.552 "req_id": 1 00:14:58.552 } 00:14:58.552 Got JSON-RPC error response 00:14:58.552 response: 00:14:58.552 { 00:14:58.552 "code": -32603, 00:14:58.552 "message": "Unable to find target foobar" 00:14:58.552 }' 00:14:58.552 17:24:18 -- target/invalid.sh@41 -- # [[ request: 00:14:58.552 { 00:14:58.553 "nqn": "nqn.2016-06.io.spdk:cnode18688", 00:14:58.553 "tgt_name": "foobar", 00:14:58.553 "method": "nvmf_create_subsystem", 00:14:58.553 "req_id": 1 00:14:58.553 } 00:14:58.553 Got JSON-RPC error response 00:14:58.553 response: 00:14:58.553 { 00:14:58.553 "code": -32603, 00:14:58.553 "message": "Unable to find target foobar" 00:14:58.553 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:58.553 17:24:18 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:58.553 17:24:18 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3073 00:14:58.811 [2024-11-09 17:24:18.346938] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3073: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:58.811 17:24:18 -- target/invalid.sh@45 -- # out='request: 00:14:58.811 { 00:14:58.811 "nqn": "nqn.2016-06.io.spdk:cnode3073", 00:14:58.811 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:58.811 "method": "nvmf_create_subsystem", 00:14:58.811 "req_id": 1 00:14:58.811 } 00:14:58.811 Got JSON-RPC error response 00:14:58.811 response: 00:14:58.811 { 00:14:58.811 "code": -32602, 00:14:58.811 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:58.811 }' 00:14:58.811 17:24:18 -- target/invalid.sh@46 -- # [[ request: 00:14:58.811 { 00:14:58.811 "nqn": "nqn.2016-06.io.spdk:cnode3073", 00:14:58.811 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:58.811 "method": "nvmf_create_subsystem", 00:14:58.811 "req_id": 1 00:14:58.811 } 00:14:58.811 Got JSON-RPC error response 00:14:58.811 response: 00:14:58.811 { 00:14:58.811 
"code": -32602, 00:14:58.811 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:58.811 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:58.811 17:24:18 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:58.811 17:24:18 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5771 00:14:58.811 [2024-11-09 17:24:18.539543] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5771: invalid model number 'SPDK_Controller' 00:14:58.811 17:24:18 -- target/invalid.sh@50 -- # out='request: 00:14:58.811 { 00:14:58.811 "nqn": "nqn.2016-06.io.spdk:cnode5771", 00:14:58.811 "model_number": "SPDK_Controller\u001f", 00:14:58.811 "method": "nvmf_create_subsystem", 00:14:58.811 "req_id": 1 00:14:58.811 } 00:14:58.811 Got JSON-RPC error response 00:14:58.811 response: 00:14:58.812 { 00:14:58.812 "code": -32602, 00:14:58.812 "message": "Invalid MN SPDK_Controller\u001f" 00:14:58.812 }' 00:14:58.812 17:24:18 -- target/invalid.sh@51 -- # [[ request: 00:14:58.812 { 00:14:58.812 "nqn": "nqn.2016-06.io.spdk:cnode5771", 00:14:58.812 "model_number": "SPDK_Controller\u001f", 00:14:58.812 "method": "nvmf_create_subsystem", 00:14:58.812 "req_id": 1 00:14:58.812 } 00:14:58.812 Got JSON-RPC error response 00:14:58.812 response: 00:14:58.812 { 00:14:58.812 "code": -32602, 00:14:58.812 "message": "Invalid MN SPDK_Controller\u001f" 00:14:58.812 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:58.812 17:24:18 -- target/invalid.sh@54 -- # gen_random_s 21 00:14:58.812 17:24:18 -- target/invalid.sh@19 -- # local length=21 ll 00:14:58.812 17:24:18 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:58.812 17:24:18 -- target/invalid.sh@21 -- # local chars 00:14:58.812 17:24:18 -- target/invalid.sh@22 -- # local string 00:14:58.812 17:24:18 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:58.812 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.812 17:24:18 -- target/invalid.sh@25 -- # printf %x 118 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # string+=v 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # printf %x 51 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # string+=3 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # printf %x 79 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # string+=O 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # printf %x 111 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # echo -e 
'\x6f' 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # string+=o 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # printf %x 94 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # string+='^' 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # printf %x 111 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # string+=o 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # printf %x 96 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # string+='`' 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # printf %x 70 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # string+=F 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # printf %x 74 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # string+=J 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.069 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.069 17:24:18 -- target/invalid.sh@25 -- # printf %x 54 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # string+=6 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # printf %x 114 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # string+=r 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # printf %x 66 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # string+=B 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # printf %x 80 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # string+=P 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # printf %x 82 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # string+=R 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # printf %x 121 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # echo -e 
'\x79' 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # string+=y 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # printf %x 48 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # string+=0 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # printf %x 64 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # string+=@ 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # printf %x 64 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # string+=@ 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # printf %x 83 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # string+=S 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # printf %x 90 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # string+=Z 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # printf %x 70 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:59.070 17:24:18 -- target/invalid.sh@25 -- # string+=F 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.070 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.070 17:24:18 -- target/invalid.sh@28 -- # [[ v == \- ]] 00:14:59.070 17:24:18 -- target/invalid.sh@31 -- # echo 'v3Oo^o`FJ6rBPRy0@@SZF' 00:14:59.070 17:24:18 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'v3Oo^o`FJ6rBPRy0@@SZF' nqn.2016-06.io.spdk:cnode10983 00:14:59.328 [2024-11-09 17:24:18.904763] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10983: invalid serial number 'v3Oo^o`FJ6rBPRy0@@SZF' 00:14:59.328 17:24:18 -- target/invalid.sh@54 -- # out='request: 00:14:59.328 { 00:14:59.328 "nqn": "nqn.2016-06.io.spdk:cnode10983", 00:14:59.328 "serial_number": "v3Oo^o`FJ6rBPRy0@@SZF", 00:14:59.328 "method": "nvmf_create_subsystem", 00:14:59.328 "req_id": 1 00:14:59.328 } 00:14:59.328 Got JSON-RPC error response 00:14:59.328 response: 00:14:59.328 { 00:14:59.328 "code": -32602, 00:14:59.328 "message": "Invalid SN v3Oo^o`FJ6rBPRy0@@SZF" 00:14:59.328 }' 00:14:59.328 17:24:18 -- target/invalid.sh@55 -- # [[ request: 00:14:59.328 { 00:14:59.328 "nqn": "nqn.2016-06.io.spdk:cnode10983", 00:14:59.328 "serial_number": "v3Oo^o`FJ6rBPRy0@@SZF", 00:14:59.328 "method": "nvmf_create_subsystem", 00:14:59.328 "req_id": 1 00:14:59.328 } 00:14:59.328 Got JSON-RPC error response 00:14:59.328 response: 00:14:59.328 { 00:14:59.328 "code": -32602, 00:14:59.328 "message": "Invalid SN v3Oo^o`FJ6rBPRy0@@SZF" 
00:14:59.328 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:59.329 17:24:18 -- target/invalid.sh@58 -- # gen_random_s 41 00:14:59.329 17:24:18 -- target/invalid.sh@19 -- # local length=41 ll 00:14:59.329 17:24:18 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:59.329 17:24:18 -- target/invalid.sh@21 -- # local chars 00:14:59.329 17:24:18 -- target/invalid.sh@22 -- # local string 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # printf %x 121 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # string+=y 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # printf %x 67 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # string+=C 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # printf %x 56 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # string+=8 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # printf %x 50 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # string+=2 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # printf %x 71 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # string+=G 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # printf %x 64 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # string+=@ 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # printf %x 63 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # string+='?' 
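The negative cases exercised above reduce to three malformed nvmf_create_subsystem calls; a hedged sketch of the same invocations made directly with rpc.py, each of which is expected to fail:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # unknown target name -> "Unable to find target foobar"
    $RPC nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18688
    # serial number containing a control character (0x1f) -> "Invalid SN"
    $RPC nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3073
    # model number containing a control character (0x1f) -> "Invalid MN"
    $RPC nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5771

Each call comes back with a JSON-RPC error (-32603 or -32602), and the script pattern-matches the error text as seen in the responses above.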
00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:18 -- target/invalid.sh@25 -- # printf %x 125 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # string+='}' 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # printf %x 97 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # string+=a 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # printf %x 92 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # string+='\' 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # printf %x 35 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # string+='#' 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # printf %x 118 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # string+=v 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # printf %x 72 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # string+=H 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # printf %x 87 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # string+=W 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # printf %x 75 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # string+=K 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # printf %x 69 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # string+=E 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # printf %x 93 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # string+=']' 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # printf %x 68 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # string+=D 
00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # printf %x 42 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # string+='*' 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.329 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.329 17:24:19 -- target/invalid.sh@25 -- # printf %x 68 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+=D 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 70 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+=F 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 34 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+='"' 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 85 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+=U 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 63 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+='?' 
00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 62 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+='>' 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 91 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+='[' 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 42 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+='*' 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 122 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+=z 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 50 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+=2 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 39 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+=\' 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 51 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+=3 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 84 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+=T 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 92 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+='\' 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 42 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+='*' 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 62 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # 
string+='>' 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 75 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+=K 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 105 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+=i 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 122 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+=z 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 80 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+=P 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 93 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+=']' 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # printf %x 47 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:59.589 17:24:19 -- target/invalid.sh@25 -- # string+=/ 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.589 17:24:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.589 17:24:19 -- target/invalid.sh@28 -- # [[ y == \- ]] 00:14:59.589 17:24:19 -- target/invalid.sh@31 -- # echo 'yC82G@?}a\#vHWKE]D*DF"U?>[*z2'\''3T\*>KizP]/' 00:14:59.589 17:24:19 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'yC82G@?}a\#vHWKE]D*DF"U?>[*z2'\''3T\*>KizP]/' nqn.2016-06.io.spdk:cnode17184 00:14:59.849 [2024-11-09 17:24:19.426511] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17184: invalid model number 'yC82G@?}a\#vHWKE]D*DF"U?>[*z2'3T\*>KizP]/' 00:14:59.849 17:24:19 -- target/invalid.sh@58 -- # out='request: 00:14:59.849 { 00:14:59.849 "nqn": "nqn.2016-06.io.spdk:cnode17184", 00:14:59.849 "model_number": "yC82G@?}a\\#vHWKE]D*DF\"U?>[*z2'\''3T\\*>KizP]/", 00:14:59.849 "method": "nvmf_create_subsystem", 00:14:59.849 "req_id": 1 00:14:59.849 } 00:14:59.849 Got JSON-RPC error response 00:14:59.849 response: 00:14:59.849 { 00:14:59.849 "code": -32602, 00:14:59.849 "message": "Invalid MN yC82G@?}a\\#vHWKE]D*DF\"U?>[*z2'\''3T\\*>KizP]/" 00:14:59.849 }' 00:14:59.849 17:24:19 -- target/invalid.sh@59 -- # [[ request: 00:14:59.849 { 00:14:59.849 "nqn": "nqn.2016-06.io.spdk:cnode17184", 00:14:59.849 "model_number": "yC82G@?}a\\#vHWKE]D*DF\"U?>[*z2'3T\\*>KizP]/", 00:14:59.849 "method": "nvmf_create_subsystem", 00:14:59.849 "req_id": 1 00:14:59.849 } 00:14:59.849 Got JSON-RPC error response 00:14:59.849 response: 00:14:59.849 { 
00:14:59.849 "code": -32602, 00:14:59.849 "message": "Invalid MN yC82G@?}a\\#vHWKE]D*DF\"U?>[*z2'3T\\*>KizP]/" 00:14:59.849 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:59.849 17:24:19 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:15:00.109 [2024-11-09 17:24:19.637149] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19f2970/0x19f6e60) succeed. 00:15:00.109 [2024-11-09 17:24:19.646246] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19f3f60/0x1a38500) succeed. 00:15:00.109 17:24:19 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:00.368 17:24:19 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:15:00.368 17:24:19 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:15:00.368 192.168.100.9' 00:15:00.368 17:24:19 -- target/invalid.sh@67 -- # head -n 1 00:15:00.368 17:24:19 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:15:00.368 17:24:19 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:15:00.627 [2024-11-09 17:24:20.150050] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:00.627 17:24:20 -- target/invalid.sh@69 -- # out='request: 00:15:00.627 { 00:15:00.627 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:00.627 "listen_address": { 00:15:00.627 "trtype": "rdma", 00:15:00.627 "traddr": "192.168.100.8", 00:15:00.627 "trsvcid": "4421" 00:15:00.627 }, 00:15:00.627 "method": "nvmf_subsystem_remove_listener", 00:15:00.627 "req_id": 1 00:15:00.627 } 00:15:00.627 Got JSON-RPC error response 00:15:00.627 response: 00:15:00.627 { 00:15:00.627 "code": -32602, 00:15:00.627 "message": "Invalid parameters" 00:15:00.627 }' 00:15:00.627 17:24:20 -- target/invalid.sh@70 -- # [[ request: 00:15:00.627 { 00:15:00.627 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:00.627 "listen_address": { 00:15:00.627 "trtype": "rdma", 00:15:00.627 "traddr": "192.168.100.8", 00:15:00.627 "trsvcid": "4421" 00:15:00.627 }, 00:15:00.627 "method": "nvmf_subsystem_remove_listener", 00:15:00.627 "req_id": 1 00:15:00.627 } 00:15:00.627 Got JSON-RPC error response 00:15:00.627 response: 00:15:00.627 { 00:15:00.627 "code": -32602, 00:15:00.627 "message": "Invalid parameters" 00:15:00.627 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:00.627 17:24:20 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9893 -i 0 00:15:00.627 [2024-11-09 17:24:20.350716] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9893: invalid cntlid range [0-65519] 00:15:00.627 17:24:20 -- target/invalid.sh@73 -- # out='request: 00:15:00.627 { 00:15:00.627 "nqn": "nqn.2016-06.io.spdk:cnode9893", 00:15:00.627 "min_cntlid": 0, 00:15:00.627 "method": "nvmf_create_subsystem", 00:15:00.627 "req_id": 1 00:15:00.627 } 00:15:00.627 Got JSON-RPC error response 00:15:00.627 response: 00:15:00.627 { 00:15:00.627 "code": -32602, 00:15:00.627 "message": "Invalid cntlid range [0-65519]" 00:15:00.627 }' 00:15:00.627 17:24:20 -- target/invalid.sh@74 -- # [[ request: 00:15:00.627 { 00:15:00.627 "nqn": "nqn.2016-06.io.spdk:cnode9893", 00:15:00.627 "min_cntlid": 0, 00:15:00.627 "method": "nvmf_create_subsystem", 00:15:00.627 "req_id": 1 00:15:00.627 } 00:15:00.627 Got 
JSON-RPC error response 00:15:00.627 response: 00:15:00.627 { 00:15:00.627 "code": -32602, 00:15:00.627 "message": "Invalid cntlid range [0-65519]" 00:15:00.627 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:00.627 17:24:20 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5634 -i 65520 00:15:00.887 [2024-11-09 17:24:20.547425] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5634: invalid cntlid range [65520-65519] 00:15:00.887 17:24:20 -- target/invalid.sh@75 -- # out='request: 00:15:00.887 { 00:15:00.887 "nqn": "nqn.2016-06.io.spdk:cnode5634", 00:15:00.887 "min_cntlid": 65520, 00:15:00.887 "method": "nvmf_create_subsystem", 00:15:00.887 "req_id": 1 00:15:00.887 } 00:15:00.887 Got JSON-RPC error response 00:15:00.887 response: 00:15:00.887 { 00:15:00.887 "code": -32602, 00:15:00.887 "message": "Invalid cntlid range [65520-65519]" 00:15:00.887 }' 00:15:00.887 17:24:20 -- target/invalid.sh@76 -- # [[ request: 00:15:00.887 { 00:15:00.887 "nqn": "nqn.2016-06.io.spdk:cnode5634", 00:15:00.887 "min_cntlid": 65520, 00:15:00.887 "method": "nvmf_create_subsystem", 00:15:00.887 "req_id": 1 00:15:00.887 } 00:15:00.887 Got JSON-RPC error response 00:15:00.887 response: 00:15:00.887 { 00:15:00.887 "code": -32602, 00:15:00.887 "message": "Invalid cntlid range [65520-65519]" 00:15:00.887 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:00.887 17:24:20 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18670 -I 0 00:15:01.146 [2024-11-09 17:24:20.744139] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18670: invalid cntlid range [1-0] 00:15:01.146 17:24:20 -- target/invalid.sh@77 -- # out='request: 00:15:01.146 { 00:15:01.146 "nqn": "nqn.2016-06.io.spdk:cnode18670", 00:15:01.146 "max_cntlid": 0, 00:15:01.146 "method": "nvmf_create_subsystem", 00:15:01.146 "req_id": 1 00:15:01.146 } 00:15:01.146 Got JSON-RPC error response 00:15:01.146 response: 00:15:01.146 { 00:15:01.146 "code": -32602, 00:15:01.146 "message": "Invalid cntlid range [1-0]" 00:15:01.146 }' 00:15:01.146 17:24:20 -- target/invalid.sh@78 -- # [[ request: 00:15:01.146 { 00:15:01.146 "nqn": "nqn.2016-06.io.spdk:cnode18670", 00:15:01.146 "max_cntlid": 0, 00:15:01.146 "method": "nvmf_create_subsystem", 00:15:01.146 "req_id": 1 00:15:01.146 } 00:15:01.146 Got JSON-RPC error response 00:15:01.146 response: 00:15:01.146 { 00:15:01.146 "code": -32602, 00:15:01.146 "message": "Invalid cntlid range [1-0]" 00:15:01.146 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:01.146 17:24:20 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23814 -I 65520 00:15:01.559 [2024-11-09 17:24:20.932838] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23814: invalid cntlid range [1-65520] 00:15:01.559 17:24:20 -- target/invalid.sh@79 -- # out='request: 00:15:01.559 { 00:15:01.559 "nqn": "nqn.2016-06.io.spdk:cnode23814", 00:15:01.559 "max_cntlid": 65520, 00:15:01.559 "method": "nvmf_create_subsystem", 00:15:01.559 "req_id": 1 00:15:01.559 } 00:15:01.559 Got JSON-RPC error response 00:15:01.559 response: 00:15:01.559 { 00:15:01.559 "code": -32602, 00:15:01.559 "message": "Invalid cntlid range [1-65520]" 00:15:01.559 }' 00:15:01.559 17:24:20 -- target/invalid.sh@80 -- # [[ 
request: 00:15:01.559 { 00:15:01.559 "nqn": "nqn.2016-06.io.spdk:cnode23814", 00:15:01.559 "max_cntlid": 65520, 00:15:01.559 "method": "nvmf_create_subsystem", 00:15:01.559 "req_id": 1 00:15:01.559 } 00:15:01.559 Got JSON-RPC error response 00:15:01.559 response: 00:15:01.559 { 00:15:01.559 "code": -32602, 00:15:01.559 "message": "Invalid cntlid range [1-65520]" 00:15:01.559 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:01.559 17:24:20 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode706 -i 6 -I 5 00:15:01.559 [2024-11-09 17:24:21.129553] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode706: invalid cntlid range [6-5] 00:15:01.559 17:24:21 -- target/invalid.sh@83 -- # out='request: 00:15:01.559 { 00:15:01.559 "nqn": "nqn.2016-06.io.spdk:cnode706", 00:15:01.559 "min_cntlid": 6, 00:15:01.559 "max_cntlid": 5, 00:15:01.559 "method": "nvmf_create_subsystem", 00:15:01.559 "req_id": 1 00:15:01.559 } 00:15:01.559 Got JSON-RPC error response 00:15:01.559 response: 00:15:01.559 { 00:15:01.559 "code": -32602, 00:15:01.559 "message": "Invalid cntlid range [6-5]" 00:15:01.559 }' 00:15:01.559 17:24:21 -- target/invalid.sh@84 -- # [[ request: 00:15:01.559 { 00:15:01.559 "nqn": "nqn.2016-06.io.spdk:cnode706", 00:15:01.559 "min_cntlid": 6, 00:15:01.559 "max_cntlid": 5, 00:15:01.559 "method": "nvmf_create_subsystem", 00:15:01.559 "req_id": 1 00:15:01.559 } 00:15:01.559 Got JSON-RPC error response 00:15:01.559 response: 00:15:01.559 { 00:15:01.559 "code": -32602, 00:15:01.559 "message": "Invalid cntlid range [6-5]" 00:15:01.559 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:01.559 17:24:21 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:01.559 17:24:21 -- target/invalid.sh@87 -- # out='request: 00:15:01.559 { 00:15:01.559 "name": "foobar", 00:15:01.559 "method": "nvmf_delete_target", 00:15:01.559 "req_id": 1 00:15:01.559 } 00:15:01.559 Got JSON-RPC error response 00:15:01.559 response: 00:15:01.559 { 00:15:01.559 "code": -32602, 00:15:01.559 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:01.559 }' 00:15:01.559 17:24:21 -- target/invalid.sh@88 -- # [[ request: 00:15:01.559 { 00:15:01.559 "name": "foobar", 00:15:01.559 "method": "nvmf_delete_target", 00:15:01.559 "req_id": 1 00:15:01.559 } 00:15:01.559 Got JSON-RPC error response 00:15:01.559 response: 00:15:01.559 { 00:15:01.559 "code": -32602, 00:15:01.559 "message": "The specified target doesn't exist, cannot delete it." 
00:15:01.559 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:01.559 17:24:21 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:01.559 17:24:21 -- target/invalid.sh@91 -- # nvmftestfini 00:15:01.559 17:24:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:01.559 17:24:21 -- nvmf/common.sh@116 -- # sync 00:15:01.559 17:24:21 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:01.559 17:24:21 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:01.559 17:24:21 -- nvmf/common.sh@119 -- # set +e 00:15:01.559 17:24:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:01.559 17:24:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:01.559 rmmod nvme_rdma 00:15:01.559 rmmod nvme_fabrics 00:15:01.845 17:24:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:01.845 17:24:21 -- nvmf/common.sh@123 -- # set -e 00:15:01.845 17:24:21 -- nvmf/common.sh@124 -- # return 0 00:15:01.845 17:24:21 -- nvmf/common.sh@477 -- # '[' -n 2643418 ']' 00:15:01.845 17:24:21 -- nvmf/common.sh@478 -- # killprocess 2643418 00:15:01.845 17:24:21 -- common/autotest_common.sh@936 -- # '[' -z 2643418 ']' 00:15:01.845 17:24:21 -- common/autotest_common.sh@940 -- # kill -0 2643418 00:15:01.845 17:24:21 -- common/autotest_common.sh@941 -- # uname 00:15:01.845 17:24:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:01.845 17:24:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2643418 00:15:01.845 17:24:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:01.845 17:24:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:01.845 17:24:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2643418' 00:15:01.845 killing process with pid 2643418 00:15:01.845 17:24:21 -- common/autotest_common.sh@955 -- # kill 2643418 00:15:01.845 17:24:21 -- common/autotest_common.sh@960 -- # wait 2643418 00:15:02.104 17:24:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.104 17:24:21 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:02.104 00:15:02.104 real 0m11.328s 00:15:02.104 user 0m21.621s 00:15:02.104 sys 0m6.206s 00:15:02.104 17:24:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:02.104 17:24:21 -- common/autotest_common.sh@10 -- # set +x 00:15:02.105 ************************************ 00:15:02.105 END TEST nvmf_invalid 00:15:02.105 ************************************ 00:15:02.105 17:24:21 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:02.105 17:24:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:02.105 17:24:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:02.105 17:24:21 -- common/autotest_common.sh@10 -- # set +x 00:15:02.105 ************************************ 00:15:02.105 START TEST nvmf_abort 00:15:02.105 ************************************ 00:15:02.105 17:24:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:02.105 * Looking for test storage... 
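All of the negative cases in the nvmf_invalid run above funnel through the same JSON-RPC entry point and are expected to fail with code -32602: the generated serial number is 21 characters and the model number 41, one past the 20- and 40-byte fields NVMe defines for them, and the cntlid ranges deliberately violate the 1-65519 / min<=max rules. A hand-run reproduction against an already-listening target could look like the sketch below (paths relative to an SPDK checkout; the NQNs and the all-'A' serial are placeholders, not values taken from this run):

  # Over-long serial number (21 chars) -> expect JSON-RPC error -32602 "Invalid SN ..."
  ./scripts/rpc.py nvmf_create_subsystem -s 'AAAAAAAAAAAAAAAAAAAAA' nqn.2016-06.io.spdk:cnode1
  # min_cntlid greater than max_cntlid -> expect "Invalid cntlid range [6-5]"
  ./scripts/rpc.py nvmf_create_subsystem -i 6 -I 5 nqn.2016-06.io.spdk:cnode2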
00:15:02.105 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:02.105 17:24:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:02.105 17:24:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:02.105 17:24:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:02.105 17:24:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:02.105 17:24:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:02.105 17:24:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:02.105 17:24:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:02.105 17:24:21 -- scripts/common.sh@335 -- # IFS=.-: 00:15:02.105 17:24:21 -- scripts/common.sh@335 -- # read -ra ver1 00:15:02.105 17:24:21 -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.105 17:24:21 -- scripts/common.sh@336 -- # read -ra ver2 00:15:02.105 17:24:21 -- scripts/common.sh@337 -- # local 'op=<' 00:15:02.105 17:24:21 -- scripts/common.sh@339 -- # ver1_l=2 00:15:02.105 17:24:21 -- scripts/common.sh@340 -- # ver2_l=1 00:15:02.105 17:24:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:02.105 17:24:21 -- scripts/common.sh@343 -- # case "$op" in 00:15:02.105 17:24:21 -- scripts/common.sh@344 -- # : 1 00:15:02.105 17:24:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:02.105 17:24:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:02.105 17:24:21 -- scripts/common.sh@364 -- # decimal 1 00:15:02.105 17:24:21 -- scripts/common.sh@352 -- # local d=1 00:15:02.105 17:24:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.364 17:24:21 -- scripts/common.sh@354 -- # echo 1 00:15:02.364 17:24:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:02.364 17:24:21 -- scripts/common.sh@365 -- # decimal 2 00:15:02.364 17:24:21 -- scripts/common.sh@352 -- # local d=2 00:15:02.364 17:24:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.364 17:24:21 -- scripts/common.sh@354 -- # echo 2 00:15:02.364 17:24:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:02.364 17:24:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:02.364 17:24:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:02.365 17:24:21 -- scripts/common.sh@367 -- # return 0 00:15:02.365 17:24:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.365 17:24:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:02.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.365 --rc genhtml_branch_coverage=1 00:15:02.365 --rc genhtml_function_coverage=1 00:15:02.365 --rc genhtml_legend=1 00:15:02.365 --rc geninfo_all_blocks=1 00:15:02.365 --rc geninfo_unexecuted_blocks=1 00:15:02.365 00:15:02.365 ' 00:15:02.365 17:24:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:02.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.365 --rc genhtml_branch_coverage=1 00:15:02.365 --rc genhtml_function_coverage=1 00:15:02.365 --rc genhtml_legend=1 00:15:02.365 --rc geninfo_all_blocks=1 00:15:02.365 --rc geninfo_unexecuted_blocks=1 00:15:02.365 00:15:02.365 ' 00:15:02.365 17:24:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:02.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.365 --rc genhtml_branch_coverage=1 00:15:02.365 --rc genhtml_function_coverage=1 00:15:02.365 --rc genhtml_legend=1 00:15:02.365 --rc geninfo_all_blocks=1 00:15:02.365 --rc geninfo_unexecuted_blocks=1 00:15:02.365 00:15:02.365 ' 
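The cmp_versions traces above are the harness checking whether the installed lcov (1.15 here) predates lcov 2, which feeds the LCOV_OPTS/LCOV values exported next. Stripped down, that dotted-version comparison amounts to the following stand-in sketch (not the project's own scripts/common.sh helper):

  ver_lt() {
      local IFS='.-:'
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # equal is not "less than"
  }
  ver_lt 1.15 2 && echo "old lcov: keep the extra --rc branch/function coverage flags"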
00:15:02.365 17:24:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:02.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.365 --rc genhtml_branch_coverage=1 00:15:02.365 --rc genhtml_function_coverage=1 00:15:02.365 --rc genhtml_legend=1 00:15:02.365 --rc geninfo_all_blocks=1 00:15:02.365 --rc geninfo_unexecuted_blocks=1 00:15:02.365 00:15:02.365 ' 00:15:02.365 17:24:21 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.365 17:24:21 -- nvmf/common.sh@7 -- # uname -s 00:15:02.365 17:24:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.365 17:24:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.365 17:24:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.365 17:24:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.365 17:24:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.365 17:24:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.365 17:24:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.365 17:24:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.365 17:24:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.365 17:24:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.365 17:24:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:02.365 17:24:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:02.365 17:24:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.365 17:24:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.365 17:24:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.365 17:24:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:02.365 17:24:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.365 17:24:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.365 17:24:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.365 17:24:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.365 17:24:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.365 17:24:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.365 17:24:21 -- paths/export.sh@5 -- # export PATH 00:15:02.365 17:24:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.365 17:24:21 -- nvmf/common.sh@46 -- # : 0 00:15:02.365 17:24:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:02.365 17:24:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:02.365 17:24:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:02.365 17:24:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.365 17:24:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.365 17:24:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:02.365 17:24:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:02.365 17:24:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:02.365 17:24:21 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:02.365 17:24:21 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:02.365 17:24:21 -- target/abort.sh@14 -- # nvmftestinit 00:15:02.365 17:24:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:02.365 17:24:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.365 17:24:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:02.365 17:24:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:02.365 17:24:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:02.365 17:24:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.365 17:24:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.365 17:24:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.365 17:24:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:02.365 17:24:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:02.365 17:24:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:02.365 17:24:21 -- common/autotest_common.sh@10 -- # set +x 00:15:08.941 17:24:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:08.941 17:24:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:08.941 17:24:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:08.941 17:24:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:08.941 17:24:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:08.941 17:24:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:08.941 17:24:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:08.941 17:24:28 -- nvmf/common.sh@294 -- # net_devs=() 00:15:08.941 17:24:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:08.941 17:24:28 -- nvmf/common.sh@295 -- 
# e810=() 00:15:08.941 17:24:28 -- nvmf/common.sh@295 -- # local -ga e810 00:15:08.941 17:24:28 -- nvmf/common.sh@296 -- # x722=() 00:15:08.941 17:24:28 -- nvmf/common.sh@296 -- # local -ga x722 00:15:08.941 17:24:28 -- nvmf/common.sh@297 -- # mlx=() 00:15:08.941 17:24:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:08.941 17:24:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.941 17:24:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.941 17:24:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.941 17:24:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.941 17:24:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.941 17:24:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.941 17:24:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.941 17:24:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.941 17:24:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.941 17:24:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.941 17:24:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.941 17:24:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:08.941 17:24:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:08.941 17:24:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:08.941 17:24:28 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:08.941 17:24:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:08.941 17:24:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:08.941 17:24:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:08.941 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:08.941 17:24:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:08.941 17:24:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:08.941 17:24:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:08.941 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:08.941 17:24:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:08.941 17:24:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:08.941 17:24:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:08.941 17:24:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.941 17:24:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
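The device scan above works purely off PCI IDs: Intel (0x8086) parts are collected into the e810/x722 lists, Mellanox (0x15b3) parts into the mlx list, and since both ports at 0000:d9:00.x report device ID 0x1015 the mlx5 path is selected. Roughly the same inventory can be taken outside the harness with lspci (a sketch, not the nvmf/common.sh code itself):

  # Enumerate Mellanox (vendor 0x15b3) PCI functions and the net devices they expose.
  for dev in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
      echo "== $dev =="
      ls "/sys/bus/pci/devices/$dev/net" 2>/dev/null    # e.g. mlx_0_0 / mlx_0_1 in this run
  done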
00:15:08.941 17:24:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.941 17:24:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:08.941 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:08.941 17:24:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.941 17:24:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:08.941 17:24:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.941 17:24:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:08.941 17:24:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.941 17:24:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:08.941 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:08.941 17:24:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.941 17:24:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:08.941 17:24:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:08.941 17:24:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:08.941 17:24:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:08.941 17:24:28 -- nvmf/common.sh@57 -- # uname 00:15:08.941 17:24:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:08.941 17:24:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:08.941 17:24:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:08.941 17:24:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:08.941 17:24:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:08.941 17:24:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:08.941 17:24:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:08.941 17:24:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:08.941 17:24:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:08.941 17:24:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:08.941 17:24:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:08.941 17:24:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:08.941 17:24:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:08.941 17:24:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:08.941 17:24:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:08.941 17:24:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:08.941 17:24:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:08.941 17:24:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.941 17:24:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:08.941 17:24:28 -- nvmf/common.sh@104 -- # continue 2 00:15:08.941 17:24:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:08.941 17:24:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.941 17:24:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.941 17:24:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:08.941 17:24:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:08.941 17:24:28 -- nvmf/common.sh@104 -- # continue 2 00:15:08.941 17:24:28 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:08.942 17:24:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:08.942 17:24:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:08.942 17:24:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:08.942 17:24:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.942 17:24:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.942 17:24:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:08.942 17:24:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:08.942 17:24:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:08.942 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:08.942 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:08.942 altname enp217s0f0np0 00:15:08.942 altname ens818f0np0 00:15:08.942 inet 192.168.100.8/24 scope global mlx_0_0 00:15:08.942 valid_lft forever preferred_lft forever 00:15:08.942 17:24:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:08.942 17:24:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:08.942 17:24:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:08.942 17:24:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:08.942 17:24:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.942 17:24:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.942 17:24:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:08.942 17:24:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:08.942 17:24:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:08.942 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:08.942 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:08.942 altname enp217s0f1np1 00:15:08.942 altname ens818f1np1 00:15:08.942 inet 192.168.100.9/24 scope global mlx_0_1 00:15:08.942 valid_lft forever preferred_lft forever 00:15:08.942 17:24:28 -- nvmf/common.sh@410 -- # return 0 00:15:08.942 17:24:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:08.942 17:24:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:08.942 17:24:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:08.942 17:24:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:08.942 17:24:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:08.942 17:24:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:08.942 17:24:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:08.942 17:24:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:08.942 17:24:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:08.942 17:24:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:08.942 17:24:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:08.942 17:24:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.942 17:24:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:08.942 17:24:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:08.942 17:24:28 -- nvmf/common.sh@104 -- # continue 2 00:15:08.942 17:24:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:08.942 17:24:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.942 17:24:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:08.942 17:24:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.942 17:24:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:08.942 17:24:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:08.942 17:24:28 -- 
nvmf/common.sh@104 -- # continue 2 00:15:08.942 17:24:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:08.942 17:24:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:08.942 17:24:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:08.942 17:24:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:08.942 17:24:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.942 17:24:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.942 17:24:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:08.942 17:24:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:08.942 17:24:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:08.942 17:24:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:08.942 17:24:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.942 17:24:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.942 17:24:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:08.942 192.168.100.9' 00:15:08.942 17:24:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:08.942 192.168.100.9' 00:15:08.942 17:24:28 -- nvmf/common.sh@445 -- # head -n 1 00:15:08.942 17:24:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:08.942 17:24:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:08.942 192.168.100.9' 00:15:08.942 17:24:28 -- nvmf/common.sh@446 -- # tail -n +2 00:15:08.942 17:24:28 -- nvmf/common.sh@446 -- # head -n 1 00:15:08.942 17:24:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:08.942 17:24:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:08.942 17:24:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:08.942 17:24:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:08.942 17:24:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:08.942 17:24:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:08.942 17:24:28 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:08.942 17:24:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:08.942 17:24:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:08.942 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:15:08.942 17:24:28 -- nvmf/common.sh@469 -- # nvmfpid=2647811 00:15:08.942 17:24:28 -- nvmf/common.sh@470 -- # waitforlisten 2647811 00:15:08.942 17:24:28 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:08.942 17:24:28 -- common/autotest_common.sh@829 -- # '[' -z 2647811 ']' 00:15:08.942 17:24:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.942 17:24:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.942 17:24:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.942 17:24:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.942 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:15:08.942 [2024-11-09 17:24:28.672996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:08.942 [2024-11-09 17:24:28.673044] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.942 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.202 [2024-11-09 17:24:28.740208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:09.202 [2024-11-09 17:24:28.806460] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:09.202 [2024-11-09 17:24:28.806616] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.202 [2024-11-09 17:24:28.806626] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.202 [2024-11-09 17:24:28.806638] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.202 [2024-11-09 17:24:28.806748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.202 [2024-11-09 17:24:28.806816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.202 [2024-11-09 17:24:28.806818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.771 17:24:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:09.771 17:24:29 -- common/autotest_common.sh@862 -- # return 0 00:15:09.771 17:24:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:09.771 17:24:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:09.771 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:15:09.771 17:24:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.771 17:24:29 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:15:09.771 17:24:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.771 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:15:10.030 [2024-11-09 17:24:29.557229] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x220c860/0x2210d50) succeed. 00:15:10.030 [2024-11-09 17:24:29.566203] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x220ddb0/0x22523f0) succeed. 
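The -m 0xE mask handed to nvmf_tgt above is just a CPU bitmap: 0xE is binary 1110, so cores 1, 2 and 3 are selected and core 0 is left free, which matches the "Total cores available: 3" line and the three "Reactor started on core N" notices.

  # Quick sanity check of the mask: both expansions print 14.
  echo "$((0xE)) $((2#1110))"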
00:15:10.030 17:24:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.030 17:24:29 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:10.030 17:24:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.030 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:15:10.030 Malloc0 00:15:10.030 17:24:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.030 17:24:29 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:10.030 17:24:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.030 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:15:10.030 Delay0 00:15:10.030 17:24:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.030 17:24:29 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:10.030 17:24:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.030 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:15:10.030 17:24:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.030 17:24:29 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:10.030 17:24:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.030 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:15:10.030 17:24:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.030 17:24:29 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:10.030 17:24:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.030 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:15:10.030 [2024-11-09 17:24:29.717353] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:10.030 17:24:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.030 17:24:29 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:10.030 17:24:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.030 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:15:10.030 17:24:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.030 17:24:29 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:10.030 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.289 [2024-11-09 17:24:29.814474] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:12.194 Initializing NVMe Controllers 00:15:12.194 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:15:12.194 controller IO queue size 128 less than required 00:15:12.194 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:12.194 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:12.194 Initialization complete. Launching workers. 
00:15:12.194 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51713 00:15:12.194 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51774, failed to submit 62 00:15:12.194 success 51713, unsuccess 61, failed 0 00:15:12.194 17:24:31 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:12.194 17:24:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.194 17:24:31 -- common/autotest_common.sh@10 -- # set +x 00:15:12.194 17:24:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.194 17:24:31 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:12.194 17:24:31 -- target/abort.sh@38 -- # nvmftestfini 00:15:12.194 17:24:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:12.194 17:24:31 -- nvmf/common.sh@116 -- # sync 00:15:12.194 17:24:31 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:12.194 17:24:31 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:12.194 17:24:31 -- nvmf/common.sh@119 -- # set +e 00:15:12.194 17:24:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:12.194 17:24:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:12.194 rmmod nvme_rdma 00:15:12.453 rmmod nvme_fabrics 00:15:12.453 17:24:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:12.453 17:24:31 -- nvmf/common.sh@123 -- # set -e 00:15:12.453 17:24:31 -- nvmf/common.sh@124 -- # return 0 00:15:12.453 17:24:31 -- nvmf/common.sh@477 -- # '[' -n 2647811 ']' 00:15:12.453 17:24:31 -- nvmf/common.sh@478 -- # killprocess 2647811 00:15:12.453 17:24:31 -- common/autotest_common.sh@936 -- # '[' -z 2647811 ']' 00:15:12.453 17:24:31 -- common/autotest_common.sh@940 -- # kill -0 2647811 00:15:12.453 17:24:31 -- common/autotest_common.sh@941 -- # uname 00:15:12.453 17:24:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.453 17:24:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2647811 00:15:12.453 17:24:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:12.453 17:24:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:12.453 17:24:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2647811' 00:15:12.453 killing process with pid 2647811 00:15:12.453 17:24:32 -- common/autotest_common.sh@955 -- # kill 2647811 00:15:12.453 17:24:32 -- common/autotest_common.sh@960 -- # wait 2647811 00:15:12.713 17:24:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:12.713 17:24:32 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:12.713 00:15:12.713 real 0m10.623s 00:15:12.713 user 0m14.552s 00:15:12.713 sys 0m5.680s 00:15:12.713 17:24:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:12.713 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:15:12.713 ************************************ 00:15:12.713 END TEST nvmf_abort 00:15:12.713 ************************************ 00:15:12.713 17:24:32 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:12.713 17:24:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:12.713 17:24:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:12.713 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:15:12.713 ************************************ 00:15:12.713 START TEST nvmf_ns_hotplug_stress 00:15:12.713 ************************************ 00:15:12.713 17:24:32 -- common/autotest_common.sh@1114 
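The counters above are the summary printed by SPDK's standalone abort example, which abort.sh launched with a queue depth of 128 - more than the controller's advertised I/O queue can hold outstanding at once, hence the earlier notice that requests may be queued at the driver - and which then submits abort commands for the outstanding I/O. Run by hand, the invocation from this log is simply:

  # Same arguments abort.sh used: RDMA transport, single core (0x1), 1 second, queue depth 128.
  ./build/examples/abort \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128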
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:12.713 * Looking for test storage... 00:15:12.972 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:12.972 17:24:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:12.972 17:24:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:12.972 17:24:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:12.972 17:24:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:12.972 17:24:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:12.972 17:24:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:12.972 17:24:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:12.972 17:24:32 -- scripts/common.sh@335 -- # IFS=.-: 00:15:12.972 17:24:32 -- scripts/common.sh@335 -- # read -ra ver1 00:15:12.972 17:24:32 -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.972 17:24:32 -- scripts/common.sh@336 -- # read -ra ver2 00:15:12.972 17:24:32 -- scripts/common.sh@337 -- # local 'op=<' 00:15:12.972 17:24:32 -- scripts/common.sh@339 -- # ver1_l=2 00:15:12.972 17:24:32 -- scripts/common.sh@340 -- # ver2_l=1 00:15:12.972 17:24:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:12.972 17:24:32 -- scripts/common.sh@343 -- # case "$op" in 00:15:12.972 17:24:32 -- scripts/common.sh@344 -- # : 1 00:15:12.972 17:24:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:12.972 17:24:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:12.972 17:24:32 -- scripts/common.sh@364 -- # decimal 1 00:15:12.972 17:24:32 -- scripts/common.sh@352 -- # local d=1 00:15:12.972 17:24:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.972 17:24:32 -- scripts/common.sh@354 -- # echo 1 00:15:12.972 17:24:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:12.972 17:24:32 -- scripts/common.sh@365 -- # decimal 2 00:15:12.972 17:24:32 -- scripts/common.sh@352 -- # local d=2 00:15:12.972 17:24:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.972 17:24:32 -- scripts/common.sh@354 -- # echo 2 00:15:12.972 17:24:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:12.972 17:24:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:12.972 17:24:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:12.972 17:24:32 -- scripts/common.sh@367 -- # return 0 00:15:12.973 17:24:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.973 17:24:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:12.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.973 --rc genhtml_branch_coverage=1 00:15:12.973 --rc genhtml_function_coverage=1 00:15:12.973 --rc genhtml_legend=1 00:15:12.973 --rc geninfo_all_blocks=1 00:15:12.973 --rc geninfo_unexecuted_blocks=1 00:15:12.973 00:15:12.973 ' 00:15:12.973 17:24:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:12.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.973 --rc genhtml_branch_coverage=1 00:15:12.973 --rc genhtml_function_coverage=1 00:15:12.973 --rc genhtml_legend=1 00:15:12.973 --rc geninfo_all_blocks=1 00:15:12.973 --rc geninfo_unexecuted_blocks=1 00:15:12.973 00:15:12.973 ' 00:15:12.973 17:24:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:12.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.973 --rc genhtml_branch_coverage=1 00:15:12.973 --rc genhtml_function_coverage=1 
00:15:12.973 --rc genhtml_legend=1 00:15:12.973 --rc geninfo_all_blocks=1 00:15:12.973 --rc geninfo_unexecuted_blocks=1 00:15:12.973 00:15:12.973 ' 00:15:12.973 17:24:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:12.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.973 --rc genhtml_branch_coverage=1 00:15:12.973 --rc genhtml_function_coverage=1 00:15:12.973 --rc genhtml_legend=1 00:15:12.973 --rc geninfo_all_blocks=1 00:15:12.973 --rc geninfo_unexecuted_blocks=1 00:15:12.973 00:15:12.973 ' 00:15:12.973 17:24:32 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.973 17:24:32 -- nvmf/common.sh@7 -- # uname -s 00:15:12.973 17:24:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.973 17:24:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.973 17:24:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.973 17:24:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.973 17:24:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.973 17:24:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.973 17:24:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.973 17:24:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.973 17:24:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.973 17:24:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.973 17:24:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:12.973 17:24:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:12.973 17:24:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.973 17:24:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.973 17:24:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.973 17:24:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:12.973 17:24:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.973 17:24:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.973 17:24:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.973 17:24:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.973 17:24:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.973 17:24:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.973 17:24:32 -- paths/export.sh@5 -- # export PATH 00:15:12.973 17:24:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.973 17:24:32 -- nvmf/common.sh@46 -- # : 0 00:15:12.973 17:24:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:12.973 17:24:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:12.973 17:24:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:12.973 17:24:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.973 17:24:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.973 17:24:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:12.973 17:24:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:12.973 17:24:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:12.973 17:24:32 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:12.973 17:24:32 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:12.973 17:24:32 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:12.973 17:24:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.973 17:24:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:12.973 17:24:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:12.973 17:24:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:12.973 17:24:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.973 17:24:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.973 17:24:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.973 17:24:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:12.973 17:24:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:12.973 17:24:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:12.973 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:15:19.540 17:24:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:19.540 17:24:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:19.540 17:24:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:19.540 17:24:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:19.540 17:24:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:19.540 17:24:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:19.540 17:24:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:19.540 17:24:38 -- nvmf/common.sh@294 -- # net_devs=() 00:15:19.540 17:24:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:19.540 17:24:38 -- nvmf/common.sh@295 -- 
# e810=() 00:15:19.540 17:24:38 -- nvmf/common.sh@295 -- # local -ga e810 00:15:19.540 17:24:38 -- nvmf/common.sh@296 -- # x722=() 00:15:19.540 17:24:38 -- nvmf/common.sh@296 -- # local -ga x722 00:15:19.540 17:24:38 -- nvmf/common.sh@297 -- # mlx=() 00:15:19.540 17:24:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:19.540 17:24:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.540 17:24:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.540 17:24:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.540 17:24:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.540 17:24:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.540 17:24:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.540 17:24:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.540 17:24:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.540 17:24:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.540 17:24:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.540 17:24:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.540 17:24:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:19.540 17:24:38 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:19.540 17:24:38 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:19.540 17:24:38 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:19.540 17:24:38 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:19.540 17:24:38 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:19.540 17:24:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:19.540 17:24:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:19.540 17:24:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:19.540 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:19.540 17:24:38 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:19.540 17:24:38 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:19.540 17:24:38 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:19.540 17:24:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:19.540 17:24:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:19.540 17:24:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:19.540 17:24:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:19.540 17:24:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:19.540 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:19.540 17:24:38 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:19.540 17:24:38 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:19.540 17:24:38 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:19.540 17:24:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:19.540 17:24:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:19.540 17:24:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:19.540 17:24:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:19.540 17:24:38 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:19.540 17:24:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:19.540 17:24:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.540 17:24:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
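Note: the block above enumerates RDMA-capable NICs by PCI vendor/device ID (Intel E810/X722 plus the Mellanox ConnectX list), keeps only the mlx entries because SPDK_TEST_NVMF_NICS=mlx5, and maps each PCI function to its kernel net device through sysfs. A hedged one-liner doing the same mapping by hand, using the PCI address reported in this run:
# Sketch: list the net device(s) backing a given PCI function (address taken from the log above)
ls /sys/bus/pci/devices/0000:d9:00.0/net/
# expected to print the interface name, mlx_0_0 in this run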
00:15:19.540 17:24:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.540 17:24:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:19.540 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:19.540 17:24:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.540 17:24:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:19.540 17:24:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.540 17:24:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:19.540 17:24:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.540 17:24:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:19.540 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:19.541 17:24:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.541 17:24:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:19.541 17:24:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:19.541 17:24:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:19.541 17:24:38 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:19.541 17:24:38 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:19.541 17:24:38 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:19.541 17:24:38 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:19.541 17:24:38 -- nvmf/common.sh@57 -- # uname 00:15:19.541 17:24:38 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:19.541 17:24:38 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:19.541 17:24:38 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:19.541 17:24:38 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:19.541 17:24:38 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:19.541 17:24:38 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:19.541 17:24:38 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:19.541 17:24:38 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:19.541 17:24:38 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:19.541 17:24:38 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:19.541 17:24:38 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:19.541 17:24:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:19.541 17:24:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:19.541 17:24:38 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:19.541 17:24:38 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:19.541 17:24:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:19.541 17:24:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:19.541 17:24:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:19.541 17:24:38 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:19.541 17:24:38 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:19.541 17:24:38 -- nvmf/common.sh@104 -- # continue 2 00:15:19.541 17:24:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:19.541 17:24:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:19.541 17:24:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:19.541 17:24:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:19.541 17:24:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:19.541 17:24:38 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:19.541 17:24:38 -- nvmf/common.sh@104 -- # continue 2 00:15:19.541 17:24:38 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:19.541 17:24:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:19.541 17:24:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:19.541 17:24:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:19.541 17:24:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:19.541 17:24:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:19.541 17:24:38 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:19.541 17:24:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:19.541 17:24:38 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:19.541 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:19.541 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:19.541 altname enp217s0f0np0 00:15:19.541 altname ens818f0np0 00:15:19.541 inet 192.168.100.8/24 scope global mlx_0_0 00:15:19.541 valid_lft forever preferred_lft forever 00:15:19.541 17:24:38 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:19.541 17:24:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:19.541 17:24:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:19.541 17:24:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:19.541 17:24:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:19.541 17:24:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:19.541 17:24:38 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:19.541 17:24:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:19.541 17:24:38 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:19.541 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:19.541 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:19.541 altname enp217s0f1np1 00:15:19.541 altname ens818f1np1 00:15:19.541 inet 192.168.100.9/24 scope global mlx_0_1 00:15:19.541 valid_lft forever preferred_lft forever 00:15:19.541 17:24:38 -- nvmf/common.sh@410 -- # return 0 00:15:19.541 17:24:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:19.541 17:24:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:19.541 17:24:38 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:19.541 17:24:38 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:19.541 17:24:38 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:19.541 17:24:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:19.541 17:24:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:19.541 17:24:38 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:19.541 17:24:38 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:19.541 17:24:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:19.541 17:24:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:19.541 17:24:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:19.541 17:24:38 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:19.541 17:24:38 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:19.541 17:24:38 -- nvmf/common.sh@104 -- # continue 2 00:15:19.541 17:24:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:19.541 17:24:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:19.541 17:24:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:19.541 17:24:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:19.541 17:24:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:19.541 17:24:38 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:19.541 17:24:38 -- 
nvmf/common.sh@104 -- # continue 2 00:15:19.541 17:24:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:19.541 17:24:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:19.541 17:24:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:19.541 17:24:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:19.541 17:24:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:19.541 17:24:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:19.541 17:24:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:19.541 17:24:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:19.541 17:24:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:19.541 17:24:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:19.541 17:24:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:19.541 17:24:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:19.541 17:24:38 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:19.541 192.168.100.9' 00:15:19.541 17:24:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:19.541 192.168.100.9' 00:15:19.541 17:24:39 -- nvmf/common.sh@445 -- # head -n 1 00:15:19.541 17:24:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:19.541 17:24:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:19.541 192.168.100.9' 00:15:19.541 17:24:39 -- nvmf/common.sh@446 -- # tail -n +2 00:15:19.541 17:24:39 -- nvmf/common.sh@446 -- # head -n 1 00:15:19.541 17:24:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:19.541 17:24:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:19.541 17:24:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:19.541 17:24:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:19.541 17:24:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:19.541 17:24:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:19.541 17:24:39 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:19.541 17:24:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:19.541 17:24:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:19.541 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:15:19.541 17:24:39 -- nvmf/common.sh@469 -- # nvmfpid=2651668 00:15:19.541 17:24:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:19.541 17:24:39 -- nvmf/common.sh@470 -- # waitforlisten 2651668 00:15:19.541 17:24:39 -- common/autotest_common.sh@829 -- # '[' -z 2651668 ']' 00:15:19.541 17:24:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.541 17:24:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.541 17:24:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.541 17:24:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.541 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:15:19.541 [2024-11-09 17:24:39.102937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
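Note: allocate_nic_ips/get_available_rdma_ips above derive RDMA_IP_LIST (192.168.100.8 and 192.168.100.9) by parsing `ip -o -4 addr show` for each mlx interface, and the first address becomes NVMF_FIRST_TARGET_IP before nvmf_tgt is launched. A short sketch of that extraction, assuming the interface names from this run:
# Sketch: pull the IPv4 address off an RDMA interface the same way nvmf/common.sh does
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8 in this run
ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9 in this run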
00:15:19.541 [2024-11-09 17:24:39.102997] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.541 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.541 [2024-11-09 17:24:39.175099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:19.541 [2024-11-09 17:24:39.249671] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:19.541 [2024-11-09 17:24:39.249788] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.541 [2024-11-09 17:24:39.249813] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.541 [2024-11-09 17:24:39.249825] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.541 [2024-11-09 17:24:39.249949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.541 [2024-11-09 17:24:39.250033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.541 [2024-11-09 17:24:39.250035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.479 17:24:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.479 17:24:39 -- common/autotest_common.sh@862 -- # return 0 00:15:20.479 17:24:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:20.479 17:24:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:20.479 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:15:20.479 17:24:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.479 17:24:39 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:20.479 17:24:39 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:20.479 [2024-11-09 17:24:40.155907] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e28860/0x1e2cd50) succeed. 00:15:20.479 [2024-11-09 17:24:40.164911] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e29db0/0x1e6e3f0) succeed. 
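Note: with the RDMA transport created and both mlx5 IB devices registered, ns_hotplug_stress.sh (echoed below) creates subsystem cnode1 with a 10-namespace cap, attaches Delay0 and NULL1, starts spdk_nvme_perf against the target, and then loops: remove namespace 1, re-add Delay0, and resize NULL1 by one block per pass while the perf job keeps reading. A minimal sketch of one loop iteration, assuming the target and bdev names from this run:
# Sketch of a single hotplug-stress iteration (names and the first resize value taken from this run)
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
./scripts/rpc.py bdev_null_resize NULL1 1001   # null_size is bumped by 1 each pass
kill -0 "$PERF_PID"                            # the test stops if the perf process has died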
00:15:20.738 17:24:40 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:20.738 17:24:40 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:20.997 [2024-11-09 17:24:40.647249] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:20.997 17:24:40 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:21.256 17:24:40 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:21.515 Malloc0 00:15:21.515 17:24:41 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:21.515 Delay0 00:15:21.515 17:24:41 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:21.774 17:24:41 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:22.034 NULL1 00:15:22.034 17:24:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:22.034 17:24:41 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:22.034 17:24:41 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2652141 00:15:22.034 17:24:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:22.035 17:24:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.294 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.231 Read completed with error (sct=0, sc=11) 00:15:23.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.231 17:24:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:23.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.490 17:24:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:23.490 17:24:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:23.750 true 00:15:23.750 17:24:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:23.750 17:24:43 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.688 17:24:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.688 17:24:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:24.688 17:24:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:24.947 true 00:15:24.947 17:24:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:24.947 17:24:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:25.885 17:24:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:25.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:25.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:25.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:25.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:25.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:25.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:25.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:25.885 17:24:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:25.885 17:24:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:26.144 true 00:15:26.144 17:24:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:26.144 17:24:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.082 17:24:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:27.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.082 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:15:27.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.082 17:24:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:15:27.082 17:24:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:27.341 true 00:15:27.341 17:24:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:27.341 17:24:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.278 17:24:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:28.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.278 17:24:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:28.278 17:24:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:28.537 true 00:15:28.537 17:24:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:28.537 17:24:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.474 17:24:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.474 17:24:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:29.474 17:24:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:29.732 true 00:15:29.732 17:24:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:29.732 17:24:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.732 17:24:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.991 17:24:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:29.991 17:24:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:30.249 true 00:15:30.249 17:24:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:30.249 17:24:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:15:31.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.625 17:24:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:31.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.625 17:24:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:31.625 17:24:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:31.625 true 00:15:31.625 17:24:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:31.625 17:24:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.560 17:24:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:32.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.818 17:24:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:32.818 17:24:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:32.818 true 00:15:32.818 17:24:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:32.818 17:24:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.755 17:24:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:15:34.014 17:24:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:34.014 17:24:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:34.014 true 00:15:34.273 17:24:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:34.273 17:24:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.100 17:24:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.100 17:24:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:35.100 17:24:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:35.359 true 00:15:35.359 17:24:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:35.359 17:24:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.296 17:24:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:36.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.296 17:24:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:36.296 17:24:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:36.555 true 00:15:36.555 17:24:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:36.555 17:24:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.492 17:24:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.492 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.492 17:24:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:37.492 17:24:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:37.751 true 00:15:37.751 17:24:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:37.751 17:24:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.688 17:24:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.688 17:24:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:38.688 17:24:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:38.947 true 00:15:38.947 17:24:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:38.947 17:24:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.884 17:24:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:39.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.884 17:24:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:39.884 17:24:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:40.143 true 00:15:40.143 17:24:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:40.143 17:24:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.079 17:25:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:41.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.079 17:25:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:41.079 17:25:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:41.337 true 00:15:41.337 17:25:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:41.337 17:25:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.273 17:25:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:42.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.274 17:25:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:42.274 17:25:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:42.532 true 00:15:42.532 17:25:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:42.532 17:25:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.469 17:25:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:43.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.469 17:25:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:43.469 17:25:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:43.728 true 00:15:43.728 17:25:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:43.728 17:25:03 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.665 17:25:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:44.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.925 17:25:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:44.925 17:25:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:44.925 true 00:15:44.925 17:25:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:44.925 17:25:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.861 17:25:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:45.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.120 17:25:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:46.120 17:25:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:46.120 true 00:15:46.379 17:25:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:46.379 17:25:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:46.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.207 17:25:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.207 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:15:47.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.207 17:25:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:47.207 17:25:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:47.466 true 00:15:47.466 17:25:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:47.466 17:25:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.405 17:25:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:48.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.405 17:25:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:48.405 17:25:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:48.665 true 00:15:48.665 17:25:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:48.665 17:25:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.605 17:25:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:49.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.605 17:25:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:49.605 17:25:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:49.865 true 00:15:49.865 17:25:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:49.865 17:25:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.802 17:25:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:50.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.802 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:15:50.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.061 17:25:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:51.061 17:25:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:51.061 true 00:15:51.061 17:25:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:51.061 17:25:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:52.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.163 17:25:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:52.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.163 17:25:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:52.163 17:25:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:52.422 true 00:15:52.423 17:25:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:52.423 17:25:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:53.360 17:25:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:53.360 17:25:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:53.360 17:25:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:53.619 true 00:15:53.619 17:25:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:53.619 17:25:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:53.619 17:25:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:53.878 17:25:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:53.878 17:25:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:54.137 true 00:15:54.137 17:25:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:54.137 17:25:13 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.396 17:25:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:54.396 17:25:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:54.396 17:25:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:54.655 true 00:15:54.655 17:25:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141 00:15:54.655 17:25:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.914 17:25:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:54.914 17:25:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:54.914 17:25:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:55.173 true 00:15:55.173 Initializing NVMe Controllers 00:15:55.173 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:15:55.173 Controller IO queue size 128, less than required. 00:15:55.173 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:55.173 Controller IO queue size 128, less than required. 00:15:55.174 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:55.174 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:55.174 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:55.174 Initialization complete. Launching workers. 
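The shell traces above are ns_hotplug_stress.sh's main loop: while the background I/O workload started earlier in the test keeps running against the subsystem, the script repeatedly detaches namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attaches the Delay0 bdev, and resizes the NULL1 null bdev one step larger per pass (null_size climbs from 1019 to 1029 here), using kill -0 on the workload PID to decide when to stop. The workload's latency summary follows below. A minimal sketch of that loop, assuming the rpc.py path shown in the traces and that the Delay0 and NULL1 bdevs already exist on the target (the PID and starting size are illustrative values taken from this run):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  io_pid=2652141        # PID of the background I/O workload in this run (illustrative)
  null_size=1019        # continues from wherever earlier passes left off (illustrative)
  while kill -0 "$io_pid" 2>/dev/null; do        # keep cycling while the workload is alive
      "$rpc" nvmf_subsystem_remove_ns "$nqn" 1   # hot-remove namespace 1
      "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0 # re-attach the Delay0 bdev as a namespace
      "$rpc" bdev_null_resize NULL1 "$null_size" # grow the null bdev one step
      null_size=$((null_size + 1))
  done
  wait "$io_pid"        # collect the workload once it exits and prints its summary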
00:15:55.174 ========================================================
00:15:55.174 Latency(us)
00:15:55.174 Device Information : IOPS MiB/s Average min max
00:15:55.174 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6054.60 2.96 18954.67 863.22 1133051.30
00:15:55.174 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 36052.67 17.60 3550.24 1274.98 285757.14
00:15:55.174 ========================================================
00:15:55.174 Total : 42107.27 20.56 5765.24 863.22 1133051.30
00:15:55.174
00:15:55.174 17:25:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2652141
00:15:55.174 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2652141) - No such process
00:15:55.174 17:25:14 -- target/ns_hotplug_stress.sh@53 -- # wait 2652141
00:15:55.174 17:25:14 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:55.433 17:25:15 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:15:55.433 17:25:15 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:15:55.433 17:25:15 -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:15:55.433 17:25:15 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:15:55.433 17:25:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:55.433 17:25:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:15:55.692 null0
00:15:55.692 17:25:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:55.692 17:25:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:55.692 17:25:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:15:55.951 null1
00:15:55.951 17:25:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:55.951 17:25:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:55.951 17:25:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:15:56.210 null2
00:15:56.210 17:25:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:56.210 17:25:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:56.210 17:25:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:15:56.210 null3
00:15:56.210 17:25:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:56.210 17:25:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:56.210 17:25:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:15:56.469 null4
00:15:56.469 17:25:16 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:56.469 17:25:16 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:56.469 17:25:16 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:15:56.728 null5
00:15:56.728 17:25:16 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:56.728 17:25:16 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:56.728 17:25:16 -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:56.728 null6 00:15:56.728 17:25:16 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:56.728 17:25:16 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:56.728 17:25:16 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:56.988 null7 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
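Once the single-namespace loop is done, the traces above show the script stripping namespaces 1 and 2 and then building eight standalone null bdevs (null0 through null7) with bdev_null_create <name> 100 4096, i.e. a 100 MB device with a 4096-byte block size, so that eight hot-plug workers can each own one. A compact version of that creation loop, under the same rpc.py path assumption as before (the 100/4096 sizing is simply what this run used):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  for ((i = 0; i < nthreads; i++)); do
      "$rpc" bdev_null_create "null$i" 100 4096   # bdev name, size in MB, block size in bytes
  done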
00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.988 17:25:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
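Each worker is the script's add_remove helper: the @14-@18 traces show it attaching its null bdev to the subsystem under a fixed namespace ID and immediately detaching it again, ten times over, while the @62-@66 traces around this point show all eight helpers being launched in the background and their PIDs collected so the script can wait on the whole set. Roughly, reusing the rpc and nqn variables from the earlier sketches:

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # attach with an explicit NSID
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # and pull it straight back out
      done
  }
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &   # worker i exercises NSID i+1 with bdev null<i>
      pids+=($!)
  done
  wait "${pids[@]}"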
00:15:56.989 17:25:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:56.989 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.989 17:25:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:56.989 17:25:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.989 17:25:16 -- target/ns_hotplug_stress.sh@66 -- # wait 2658299 2658302 2658305 2658308 2658311 2658314 2658317 2658320 00:15:56.989 17:25:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:56.989 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.989 17:25:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.989 17:25:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:57.248 17:25:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:57.248 17:25:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.248 17:25:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:57.248 17:25:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:57.248 17:25:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:57.248 17:25:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:57.248 17:25:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:57.248 17:25:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:57.507 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.767 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:58.026 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:58.026 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:58.026 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:58.026 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:58.026 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:58.026 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:58.026 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.026 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:58.286 17:25:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.286 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:58.286 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:58.286 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:58.286 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:58.545 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:58.805 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:58.805 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:58.805 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:58.805 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:58.805 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:58.805 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.805 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:58.805 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:59.065 17:25:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:59.324 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.324 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.324 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:59.324 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.325 17:25:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:59.584 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:59.584 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:59.584 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:59.584 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:59.584 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.584 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:59.584 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:59.584 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.843 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:59.844 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.844 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.844 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:59.844 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:59.844 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:59.844 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:59.844 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.844 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:59.844 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:59.844 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:59.844 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.101 17:25:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:00.360 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:00.360 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:00.360 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:00.360 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:00.360 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:00.360 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:00.360 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.360 17:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:00.360 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.360 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.360 17:25:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:00.618 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.618 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.619 17:25:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.877 17:25:20 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:00.877 17:25:20 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:00.877 17:25:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:00.877 17:25:20 -- nvmf/common.sh@116 -- # sync 00:16:00.877 17:25:20 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:00.877 17:25:20 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:00.877 17:25:20 -- nvmf/common.sh@119 -- # set +e 00:16:00.877 17:25:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:00.877 17:25:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:00.877 rmmod nvme_rdma 00:16:00.877 rmmod nvme_fabrics 00:16:00.877 17:25:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:00.877 17:25:20 -- nvmf/common.sh@123 -- # set -e 00:16:00.877 17:25:20 -- nvmf/common.sh@124 -- # return 0 00:16:00.877 17:25:20 -- nvmf/common.sh@477 -- # '[' -n 2651668 ']' 00:16:00.877 17:25:20 -- nvmf/common.sh@478 -- # killprocess 2651668 00:16:00.878 17:25:20 -- common/autotest_common.sh@936 -- # '[' -z 2651668 ']' 00:16:00.878 17:25:20 -- common/autotest_common.sh@940 -- # kill -0 2651668 00:16:00.878 17:25:20 -- common/autotest_common.sh@941 -- # uname 00:16:00.878 17:25:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:00.878 17:25:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2651668 00:16:00.878 17:25:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:00.878 17:25:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:00.878 17:25:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2651668' 00:16:00.878 killing process with pid 2651668 00:16:01.136 17:25:20 -- common/autotest_common.sh@955 -- # kill 2651668 00:16:01.136 17:25:20 -- common/autotest_common.sh@960 -- # wait 2651668 00:16:01.395 17:25:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:01.395 17:25:20 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:01.395 00:16:01.395 real 0m48.530s 00:16:01.395 user 3m19.761s 00:16:01.395 sys 0m13.427s 00:16:01.395 17:25:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:01.396 17:25:20 -- common/autotest_common.sh@10 -- # set +x 00:16:01.396 ************************************ 00:16:01.396 END TEST nvmf_ns_hotplug_stress 00:16:01.396 ************************************ 00:16:01.396 17:25:20 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:01.396 17:25:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 
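With the workers reaped, the trap is cleared and nvmftestfini unwinds the fabric: sync, unload the RDMA transport modules (the "rmmod nvme_rdma" / "rmmod nvme_fabrics" lines are modprobe -r being verbose), and kill the nvmf target application, PID 2651668 in this run, before the elapsed-time summary (real 0m48.530s) is printed and the next test, nvmf_connect_stress, is launched. A condensed sketch of that teardown order, using a placeholder variable for the PID the real helpers track internally:

  trap - SIGINT SIGTERM EXIT
  sync
  modprobe -v -r nvme-rdma        # prints "rmmod nvme_rdma"
  modprobe -v -r nvme-fabrics     # prints "rmmod nvme_fabrics"
  nvmf_target_pid=2651668         # placeholder; 2651668 is this run's target PID
  kill "$nvmf_target_pid"
  wait "$nvmf_target_pid"         # "killing process with pid 2651668" in the log above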
00:16:01.396 17:25:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:01.396 17:25:20 -- common/autotest_common.sh@10 -- # set +x 00:16:01.396 ************************************ 00:16:01.396 START TEST nvmf_connect_stress 00:16:01.396 ************************************ 00:16:01.396 17:25:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:01.396 * Looking for test storage... 00:16:01.396 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:01.396 17:25:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:01.396 17:25:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:01.396 17:25:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:01.396 17:25:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:01.396 17:25:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:01.396 17:25:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:01.396 17:25:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:01.396 17:25:21 -- scripts/common.sh@335 -- # IFS=.-: 00:16:01.396 17:25:21 -- scripts/common.sh@335 -- # read -ra ver1 00:16:01.396 17:25:21 -- scripts/common.sh@336 -- # IFS=.-: 00:16:01.396 17:25:21 -- scripts/common.sh@336 -- # read -ra ver2 00:16:01.396 17:25:21 -- scripts/common.sh@337 -- # local 'op=<' 00:16:01.396 17:25:21 -- scripts/common.sh@339 -- # ver1_l=2 00:16:01.396 17:25:21 -- scripts/common.sh@340 -- # ver2_l=1 00:16:01.396 17:25:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:01.396 17:25:21 -- scripts/common.sh@343 -- # case "$op" in 00:16:01.396 17:25:21 -- scripts/common.sh@344 -- # : 1 00:16:01.396 17:25:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:01.396 17:25:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:01.396 17:25:21 -- scripts/common.sh@364 -- # decimal 1 00:16:01.396 17:25:21 -- scripts/common.sh@352 -- # local d=1 00:16:01.396 17:25:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:01.396 17:25:21 -- scripts/common.sh@354 -- # echo 1 00:16:01.396 17:25:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:01.396 17:25:21 -- scripts/common.sh@365 -- # decimal 2 00:16:01.396 17:25:21 -- scripts/common.sh@352 -- # local d=2 00:16:01.396 17:25:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:01.396 17:25:21 -- scripts/common.sh@354 -- # echo 2 00:16:01.396 17:25:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:01.396 17:25:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:01.396 17:25:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:01.396 17:25:21 -- scripts/common.sh@367 -- # return 0 00:16:01.396 17:25:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:01.396 17:25:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:01.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.396 --rc genhtml_branch_coverage=1 00:16:01.396 --rc genhtml_function_coverage=1 00:16:01.396 --rc genhtml_legend=1 00:16:01.396 --rc geninfo_all_blocks=1 00:16:01.396 --rc geninfo_unexecuted_blocks=1 00:16:01.396 00:16:01.396 ' 00:16:01.396 17:25:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:01.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.396 --rc genhtml_branch_coverage=1 00:16:01.396 --rc genhtml_function_coverage=1 00:16:01.396 --rc genhtml_legend=1 00:16:01.396 --rc geninfo_all_blocks=1 00:16:01.396 --rc geninfo_unexecuted_blocks=1 00:16:01.396 00:16:01.396 ' 00:16:01.396 17:25:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:01.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.396 --rc genhtml_branch_coverage=1 00:16:01.396 --rc genhtml_function_coverage=1 00:16:01.396 --rc genhtml_legend=1 00:16:01.396 --rc geninfo_all_blocks=1 00:16:01.396 --rc geninfo_unexecuted_blocks=1 00:16:01.396 00:16:01.396 ' 00:16:01.396 17:25:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:01.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.396 --rc genhtml_branch_coverage=1 00:16:01.396 --rc genhtml_function_coverage=1 00:16:01.396 --rc genhtml_legend=1 00:16:01.396 --rc geninfo_all_blocks=1 00:16:01.396 --rc geninfo_unexecuted_blocks=1 00:16:01.396 00:16:01.396 ' 00:16:01.396 17:25:21 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.396 17:25:21 -- nvmf/common.sh@7 -- # uname -s 00:16:01.396 17:25:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.396 17:25:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.396 17:25:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.396 17:25:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.396 17:25:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.396 17:25:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.396 17:25:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.396 17:25:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.396 17:25:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.396 17:25:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.655 17:25:21 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:01.655 17:25:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:01.655 17:25:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.655 17:25:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.655 17:25:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.655 17:25:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:01.655 17:25:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.655 17:25:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.655 17:25:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.655 17:25:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.655 17:25:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.655 17:25:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.655 17:25:21 -- paths/export.sh@5 -- # export PATH 00:16:01.655 17:25:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.655 17:25:21 -- nvmf/common.sh@46 -- # : 0 00:16:01.655 17:25:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:01.655 17:25:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:01.655 17:25:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:01.655 17:25:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.655 17:25:21 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.655 17:25:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:01.655 17:25:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:01.655 17:25:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:01.655 17:25:21 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:01.655 17:25:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:01.655 17:25:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.655 17:25:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:01.655 17:25:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:01.656 17:25:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:01.656 17:25:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.656 17:25:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.656 17:25:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.656 17:25:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:01.656 17:25:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:01.656 17:25:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:01.656 17:25:21 -- common/autotest_common.sh@10 -- # set +x 00:16:09.780 17:25:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:09.780 17:25:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:09.780 17:25:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:09.780 17:25:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:09.780 17:25:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:09.780 17:25:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:09.780 17:25:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:09.780 17:25:28 -- nvmf/common.sh@294 -- # net_devs=() 00:16:09.780 17:25:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:09.780 17:25:28 -- nvmf/common.sh@295 -- # e810=() 00:16:09.780 17:25:28 -- nvmf/common.sh@295 -- # local -ga e810 00:16:09.780 17:25:28 -- nvmf/common.sh@296 -- # x722=() 00:16:09.780 17:25:28 -- nvmf/common.sh@296 -- # local -ga x722 00:16:09.780 17:25:28 -- nvmf/common.sh@297 -- # mlx=() 00:16:09.780 17:25:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:09.780 17:25:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.780 17:25:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.780 17:25:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.780 17:25:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.780 17:25:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.780 17:25:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.780 17:25:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.780 17:25:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.780 17:25:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.780 17:25:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.780 17:25:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.780 17:25:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:09.780 17:25:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:09.780 17:25:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:09.780 17:25:28 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
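The device discovery that follows groups NICs by PCI vendor/device ID (Intel e810/x722 versus Mellanox ConnectX) and, because SPDK_TEST_NVMF_NICS=mlx5, keeps only the Mellanox entries. A rough standalone equivalent using lspci; the ID table below is abbreviated and purely illustrative, the real common.sh caches the whole PCI bus instead:

  #!/usr/bin/env bash
  # Sketch: list candidate NVMe-oF test NICs by PCI vendor:device ID.
  mellanox=15b3
  declare -a mlx_ids=(1013 1015 1017 1019 101d 1021)

  # lspci -Dn prints "domain:bus:dev.fn class: vendor:device"; filter on vendor.
  while read -r addr rest; do
      for id in "${mlx_ids[@]}"; do
          if [[ $rest == *"$mellanox:$id"* ]]; then
              echo "Found $addr ($mellanox - $id)"
          fi
      done
  done < <(lspci -Dn -d "$mellanox:")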
00:16:09.780 17:25:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:09.780 17:25:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:09.780 17:25:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:09.780 17:25:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:09.780 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:09.780 17:25:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:09.780 17:25:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:09.780 17:25:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:09.780 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:09.780 17:25:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:09.780 17:25:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:09.780 17:25:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:09.780 17:25:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.780 17:25:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:09.780 17:25:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.780 17:25:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:09.780 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:09.780 17:25:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.780 17:25:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:09.780 17:25:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.780 17:25:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:09.780 17:25:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.780 17:25:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:09.780 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:09.780 17:25:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.780 17:25:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:09.780 17:25:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:09.780 17:25:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:09.780 17:25:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:09.780 17:25:28 -- nvmf/common.sh@57 -- # uname 00:16:09.780 17:25:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:09.780 17:25:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:09.780 17:25:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:09.780 17:25:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:09.780 
17:25:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:09.780 17:25:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:09.780 17:25:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:09.780 17:25:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:09.780 17:25:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:09.780 17:25:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:09.780 17:25:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:09.780 17:25:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:09.780 17:25:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:09.780 17:25:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:09.780 17:25:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:09.780 17:25:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:09.780 17:25:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:09.780 17:25:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:09.780 17:25:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:09.780 17:25:28 -- nvmf/common.sh@104 -- # continue 2 00:16:09.780 17:25:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:09.780 17:25:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:09.780 17:25:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:09.780 17:25:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:09.780 17:25:28 -- nvmf/common.sh@104 -- # continue 2 00:16:09.780 17:25:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:09.780 17:25:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:09.780 17:25:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:09.780 17:25:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:09.780 17:25:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:09.780 17:25:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:09.780 17:25:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:09.780 17:25:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:09.780 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:09.780 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:09.780 altname enp217s0f0np0 00:16:09.780 altname ens818f0np0 00:16:09.780 inet 192.168.100.8/24 scope global mlx_0_0 00:16:09.780 valid_lft forever preferred_lft forever 00:16:09.780 17:25:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:09.780 17:25:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:09.780 17:25:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:09.780 17:25:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:09.780 17:25:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:09.780 17:25:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:09.780 17:25:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:09.780 17:25:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:09.780 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:09.780 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:09.780 altname enp217s0f1np1 
00:16:09.780 altname ens818f1np1 00:16:09.780 inet 192.168.100.9/24 scope global mlx_0_1 00:16:09.780 valid_lft forever preferred_lft forever 00:16:09.780 17:25:28 -- nvmf/common.sh@410 -- # return 0 00:16:09.780 17:25:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:09.780 17:25:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:09.780 17:25:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:09.780 17:25:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:09.780 17:25:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:09.780 17:25:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:09.780 17:25:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:09.780 17:25:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:09.780 17:25:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:09.780 17:25:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:09.780 17:25:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:09.780 17:25:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:09.780 17:25:28 -- nvmf/common.sh@104 -- # continue 2 00:16:09.780 17:25:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:09.780 17:25:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:09.780 17:25:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:09.780 17:25:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:09.780 17:25:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:09.781 17:25:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:09.781 17:25:28 -- nvmf/common.sh@104 -- # continue 2 00:16:09.781 17:25:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:09.781 17:25:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:09.781 17:25:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:09.781 17:25:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:09.781 17:25:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:09.781 17:25:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:09.781 17:25:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:09.781 17:25:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:09.781 17:25:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:09.781 17:25:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:09.781 17:25:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:09.781 17:25:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:09.781 17:25:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:09.781 192.168.100.9' 00:16:09.781 17:25:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:09.781 192.168.100.9' 00:16:09.781 17:25:28 -- nvmf/common.sh@445 -- # head -n 1 00:16:09.781 17:25:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:09.781 17:25:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:09.781 192.168.100.9' 00:16:09.781 17:25:28 -- nvmf/common.sh@446 -- # tail -n +2 00:16:09.781 17:25:28 -- nvmf/common.sh@446 -- # head -n 1 00:16:09.781 17:25:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:09.781 17:25:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:09.781 17:25:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:16:09.781 17:25:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:09.781 17:25:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:09.781 17:25:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:09.781 17:25:28 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:09.781 17:25:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:09.781 17:25:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:09.781 17:25:28 -- common/autotest_common.sh@10 -- # set +x 00:16:09.781 17:25:28 -- nvmf/common.sh@469 -- # nvmfpid=2662560 00:16:09.781 17:25:28 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:09.781 17:25:28 -- nvmf/common.sh@470 -- # waitforlisten 2662560 00:16:09.781 17:25:28 -- common/autotest_common.sh@829 -- # '[' -z 2662560 ']' 00:16:09.781 17:25:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.781 17:25:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.781 17:25:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.781 17:25:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.781 17:25:28 -- common/autotest_common.sh@10 -- # set +x 00:16:09.781 [2024-11-09 17:25:28.316910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:09.781 [2024-11-09 17:25:28.316973] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.781 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.781 [2024-11-09 17:25:28.389299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:09.781 [2024-11-09 17:25:28.463961] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:09.781 [2024-11-09 17:25:28.464088] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.781 [2024-11-09 17:25:28.464098] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.781 [2024-11-09 17:25:28.464107] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
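Target addressing in this phase is derived straight from the RDMA-capable interfaces: each mlx_0_* port's IPv4 address is read with ip(8), and the first two entries become the primary and secondary target IPs (192.168.100.8 and 192.168.100.9 above). A condensed sketch of that extraction, assuming the interface names are passed on the command line:

  #!/usr/bin/env bash
  # Sketch: collect IPv4 addresses of the given RDMA interfaces and pick
  # the first two as the NVMe-oF target addresses, as in the trace above.
  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }

  ip_list=""
  for nic in "$@"; do                    # e.g. ./script.sh mlx_0_0 mlx_0_1
      ip_list+="$(get_ip_address "$nic")"$'\n'
  done

  first_ip=$(echo "$ip_list" | head -n 1)
  second_ip=$(echo "$ip_list" | tail -n +2 | head -n 1)
  echo "primary target:   $first_ip"
  echo "secondary target: $second_ip"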
00:16:09.781 [2024-11-09 17:25:28.464216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.781 [2024-11-09 17:25:28.464298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:09.781 [2024-11-09 17:25:28.464300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.781 17:25:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.781 17:25:29 -- common/autotest_common.sh@862 -- # return 0 00:16:09.781 17:25:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:09.781 17:25:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.781 17:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:09.781 17:25:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.781 17:25:29 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:09.781 17:25:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.781 17:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:09.781 [2024-11-09 17:25:29.213356] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15da860/0x15ded50) succeed. 00:16:09.781 [2024-11-09 17:25:29.222380] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15dbdb0/0x16203f0) succeed. 00:16:09.781 17:25:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.781 17:25:29 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:09.781 17:25:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.781 17:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:09.781 17:25:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.781 17:25:29 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:09.781 17:25:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.781 17:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:09.781 [2024-11-09 17:25:29.339857] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:09.781 17:25:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.781 17:25:29 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:09.781 17:25:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.781 17:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:09.781 NULL1 00:16:09.781 17:25:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.781 17:25:29 -- target/connect_stress.sh@21 -- # PERF_PID=2662847 00:16:09.781 17:25:29 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:09.781 17:25:29 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:09.781 17:25:29 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # seq 1 20 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- 
target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:09.781 17:25:29 -- target/connect_stress.sh@28 -- # cat 00:16:09.781 17:25:29 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:09.781 17:25:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:09.781 17:25:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.781 17:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:10.046 17:25:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.046 17:25:29 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:10.046 17:25:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.046 17:25:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.046 17:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:10.612 17:25:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.612 17:25:30 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:10.612 17:25:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.612 17:25:30 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:10.612 17:25:30 -- common/autotest_common.sh@10 -- # set +x 00:16:10.870 17:25:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.870 17:25:30 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:10.870 17:25:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.870 17:25:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.870 17:25:30 -- common/autotest_common.sh@10 -- # set +x 00:16:11.129 17:25:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.129 17:25:30 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:11.129 17:25:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.129 17:25:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.129 17:25:30 -- common/autotest_common.sh@10 -- # set +x 00:16:11.388 17:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.388 17:25:31 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:11.388 17:25:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.388 17:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.388 17:25:31 -- common/autotest_common.sh@10 -- # set +x 00:16:11.956 17:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.956 17:25:31 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:11.956 17:25:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.956 17:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.956 17:25:31 -- common/autotest_common.sh@10 -- # set +x 00:16:12.214 17:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.214 17:25:31 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:12.214 17:25:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.215 17:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.215 17:25:31 -- common/autotest_common.sh@10 -- # set +x 00:16:12.473 17:25:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.473 17:25:32 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:12.474 17:25:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.474 17:25:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.474 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:16:12.732 17:25:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.732 17:25:32 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:12.732 17:25:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.732 17:25:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.732 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:16:12.991 17:25:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.991 17:25:32 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:12.991 17:25:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.991 17:25:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.991 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:16:13.559 17:25:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.559 17:25:33 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:13.559 17:25:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.559 17:25:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.559 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:16:13.817 17:25:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.818 17:25:33 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:13.818 17:25:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.818 17:25:33 -- common/autotest_common.sh@561 -- # xtrace_disable 
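The long run of 'kill -0 2662847' checks in this stretch is the harness repeatedly confirming that the connect_stress workload is still alive while it replays RPCs against the target; once kill -0 fails, the stress tool has exited and the test moves to teardown. A minimal sketch of that supervision pattern (the binary path, the stand-in RPC, and the sleep are placeholders, not the exact rpc_cmd batching the script uses):

  #!/usr/bin/env bash
  # Sketch: keep poking the target with RPCs for as long as the stress tool runs.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  ./connect_stress -c 0x1 -t 10 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
  perf_pid=$!

  # kill -0 sends no signal; it only probes the PID and fails once the
  # workload has exited, which ends the loop.
  while kill -0 "$perf_pid" 2>/dev/null; do
      $rpc nvmf_get_subsystems > /dev/null   # stand-in for the batched rpc_cmd calls
      sleep 1
  done
  wait "$perf_pid" || true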
00:16:13.818 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:16:14.076 17:25:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.076 17:25:33 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:14.076 17:25:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.076 17:25:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.076 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:16:14.335 17:25:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.335 17:25:34 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:14.335 17:25:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.335 17:25:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.335 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:16:14.595 17:25:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.595 17:25:34 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:14.595 17:25:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.595 17:25:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.595 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:16:15.166 17:25:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.166 17:25:34 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:15.166 17:25:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.166 17:25:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.166 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:16:15.424 17:25:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.424 17:25:34 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:15.424 17:25:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.424 17:25:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.424 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:16:15.683 17:25:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.683 17:25:35 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:15.683 17:25:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.683 17:25:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.683 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:16:15.942 17:25:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.942 17:25:35 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:15.942 17:25:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.942 17:25:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.942 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:16:16.509 17:25:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.509 17:25:35 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:16.509 17:25:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.509 17:25:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.509 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:16:16.767 17:25:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.767 17:25:36 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:16.767 17:25:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.767 17:25:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.767 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:16:17.025 17:25:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.025 17:25:36 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:17.025 17:25:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.025 17:25:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.025 
17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:16:17.284 17:25:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.284 17:25:36 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:17.284 17:25:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.284 17:25:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.284 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:16:17.542 17:25:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.542 17:25:37 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:17.543 17:25:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.543 17:25:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.543 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:16:18.110 17:25:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.110 17:25:37 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:18.110 17:25:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.110 17:25:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.110 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:16:18.368 17:25:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.368 17:25:37 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:18.368 17:25:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.368 17:25:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.369 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:16:18.628 17:25:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.628 17:25:38 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:18.628 17:25:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.628 17:25:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.628 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:16:18.887 17:25:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.887 17:25:38 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:18.887 17:25:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.887 17:25:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.887 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:16:19.457 17:25:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.457 17:25:38 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:19.457 17:25:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.457 17:25:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.457 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:16:19.716 17:25:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.716 17:25:39 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:19.716 17:25:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.716 17:25:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.716 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:16:19.975 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:19.975 17:25:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.975 17:25:39 -- target/connect_stress.sh@34 -- # kill -0 2662847 00:16:19.975 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2662847) - No such process 00:16:19.975 17:25:39 -- target/connect_stress.sh@38 -- # wait 2662847 00:16:19.975 17:25:39 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:19.975 17:25:39 -- target/connect_stress.sh@41 -- # trap - SIGINT 
SIGTERM EXIT 00:16:19.975 17:25:39 -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:19.975 17:25:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:19.975 17:25:39 -- nvmf/common.sh@116 -- # sync 00:16:19.975 17:25:39 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:19.975 17:25:39 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:19.975 17:25:39 -- nvmf/common.sh@119 -- # set +e 00:16:19.975 17:25:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:19.975 17:25:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:19.975 rmmod nvme_rdma 00:16:19.975 rmmod nvme_fabrics 00:16:19.975 17:25:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:19.975 17:25:39 -- nvmf/common.sh@123 -- # set -e 00:16:19.975 17:25:39 -- nvmf/common.sh@124 -- # return 0 00:16:19.975 17:25:39 -- nvmf/common.sh@477 -- # '[' -n 2662560 ']' 00:16:19.975 17:25:39 -- nvmf/common.sh@478 -- # killprocess 2662560 00:16:19.975 17:25:39 -- common/autotest_common.sh@936 -- # '[' -z 2662560 ']' 00:16:19.975 17:25:39 -- common/autotest_common.sh@940 -- # kill -0 2662560 00:16:19.975 17:25:39 -- common/autotest_common.sh@941 -- # uname 00:16:19.975 17:25:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:19.975 17:25:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2662560 00:16:19.975 17:25:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:19.975 17:25:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:19.975 17:25:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2662560' 00:16:19.975 killing process with pid 2662560 00:16:19.975 17:25:39 -- common/autotest_common.sh@955 -- # kill 2662560 00:16:19.975 17:25:39 -- common/autotest_common.sh@960 -- # wait 2662560 00:16:20.315 17:25:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:20.315 17:25:39 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:20.315 00:16:20.315 real 0m19.017s 00:16:20.315 user 0m42.100s 00:16:20.315 sys 0m7.953s 00:16:20.315 17:25:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:20.315 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:16:20.315 ************************************ 00:16:20.315 END TEST nvmf_connect_stress 00:16:20.315 ************************************ 00:16:20.315 17:25:40 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:20.315 17:25:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:20.315 17:25:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:20.315 17:25:40 -- common/autotest_common.sh@10 -- # set +x 00:16:20.315 ************************************ 00:16:20.315 START TEST nvmf_fused_ordering 00:16:20.315 ************************************ 00:16:20.315 17:25:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:20.576 * Looking for test storage... 
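Teardown, repeated at the end of each of these target tests, unloads the host-side NVMe fabrics modules and then stops the nvmf_tgt reactor process by PID so the next test starts from a clean slate. A trimmed-down sketch of that sequence (the PID handling is simplified; the real killprocess helper also distinguishes sudo-owned processes before deciding how to signal them):

  #!/usr/bin/env bash
  # Sketch of the per-test cleanup visible above: unload host modules,
  # then stop the target process and wait for it to exit.
  nvmfpid=$1                              # PID of the running nvmf_tgt

  sync
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics

  if ps --no-headers -o comm= "$nvmfpid" > /dev/null; then
      echo "killing process with pid $nvmfpid"
      kill "$nvmfpid"
      # wait only succeeds if the target was started by this shell.
      wait "$nvmfpid" 2>/dev/null || true
  fi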
00:16:20.576 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:20.576 17:25:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:20.576 17:25:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:20.576 17:25:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:20.576 17:25:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:20.576 17:25:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:20.576 17:25:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:20.576 17:25:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:20.576 17:25:40 -- scripts/common.sh@335 -- # IFS=.-: 00:16:20.576 17:25:40 -- scripts/common.sh@335 -- # read -ra ver1 00:16:20.576 17:25:40 -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.576 17:25:40 -- scripts/common.sh@336 -- # read -ra ver2 00:16:20.576 17:25:40 -- scripts/common.sh@337 -- # local 'op=<' 00:16:20.576 17:25:40 -- scripts/common.sh@339 -- # ver1_l=2 00:16:20.576 17:25:40 -- scripts/common.sh@340 -- # ver2_l=1 00:16:20.576 17:25:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:20.577 17:25:40 -- scripts/common.sh@343 -- # case "$op" in 00:16:20.577 17:25:40 -- scripts/common.sh@344 -- # : 1 00:16:20.577 17:25:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:20.577 17:25:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:20.577 17:25:40 -- scripts/common.sh@364 -- # decimal 1 00:16:20.577 17:25:40 -- scripts/common.sh@352 -- # local d=1 00:16:20.577 17:25:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.577 17:25:40 -- scripts/common.sh@354 -- # echo 1 00:16:20.577 17:25:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:20.577 17:25:40 -- scripts/common.sh@365 -- # decimal 2 00:16:20.577 17:25:40 -- scripts/common.sh@352 -- # local d=2 00:16:20.577 17:25:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.577 17:25:40 -- scripts/common.sh@354 -- # echo 2 00:16:20.577 17:25:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:20.577 17:25:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:20.577 17:25:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:20.577 17:25:40 -- scripts/common.sh@367 -- # return 0 00:16:20.577 17:25:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.577 17:25:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:20.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.577 --rc genhtml_branch_coverage=1 00:16:20.577 --rc genhtml_function_coverage=1 00:16:20.577 --rc genhtml_legend=1 00:16:20.577 --rc geninfo_all_blocks=1 00:16:20.577 --rc geninfo_unexecuted_blocks=1 00:16:20.577 00:16:20.577 ' 00:16:20.577 17:25:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:20.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.577 --rc genhtml_branch_coverage=1 00:16:20.577 --rc genhtml_function_coverage=1 00:16:20.577 --rc genhtml_legend=1 00:16:20.577 --rc geninfo_all_blocks=1 00:16:20.577 --rc geninfo_unexecuted_blocks=1 00:16:20.577 00:16:20.577 ' 00:16:20.577 17:25:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:20.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.577 --rc genhtml_branch_coverage=1 00:16:20.577 --rc genhtml_function_coverage=1 00:16:20.577 --rc genhtml_legend=1 00:16:20.577 --rc geninfo_all_blocks=1 00:16:20.577 --rc geninfo_unexecuted_blocks=1 00:16:20.577 00:16:20.577 ' 
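Both test scripts probe the installed lcov with the same helper seen in this trace: split the two version strings on dots, dashes, and colons, then compare them field by field, so "1.15 < 2" decides whether the extra coverage flags get exported. A compact sketch of that comparison; it assumes purely numeric fields and is not the exact scripts/common.sh implementation:

  #!/usr/bin/env bash
  # Sketch: field-wise version comparison in the spirit of cmp_versions above.
  # Returns success (0) when $1 < $2.
  version_lt() {
      local IFS='.-:'
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "1.15 < 2"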
00:16:20.577 17:25:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:20.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.577 --rc genhtml_branch_coverage=1 00:16:20.577 --rc genhtml_function_coverage=1 00:16:20.577 --rc genhtml_legend=1 00:16:20.577 --rc geninfo_all_blocks=1 00:16:20.577 --rc geninfo_unexecuted_blocks=1 00:16:20.577 00:16:20.577 ' 00:16:20.577 17:25:40 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:20.577 17:25:40 -- nvmf/common.sh@7 -- # uname -s 00:16:20.577 17:25:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.577 17:25:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.577 17:25:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.577 17:25:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.577 17:25:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.577 17:25:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.577 17:25:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.577 17:25:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.577 17:25:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.577 17:25:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.577 17:25:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:20.577 17:25:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:20.577 17:25:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.577 17:25:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.577 17:25:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:20.577 17:25:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:20.577 17:25:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.577 17:25:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.577 17:25:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.577 17:25:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.577 17:25:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.577 17:25:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.577 17:25:40 -- paths/export.sh@5 -- # export PATH 00:16:20.577 17:25:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.577 17:25:40 -- nvmf/common.sh@46 -- # : 0 00:16:20.577 17:25:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:20.577 17:25:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:20.577 17:25:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:20.577 17:25:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.577 17:25:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.577 17:25:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:20.577 17:25:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:20.577 17:25:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:20.577 17:25:40 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:20.577 17:25:40 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:20.577 17:25:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.577 17:25:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:20.577 17:25:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:20.577 17:25:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:20.577 17:25:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.577 17:25:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.577 17:25:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.577 17:25:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:20.577 17:25:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:20.577 17:25:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:20.577 17:25:40 -- common/autotest_common.sh@10 -- # set +x 00:16:28.704 17:25:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:28.704 17:25:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:28.704 17:25:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:28.704 17:25:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:28.704 17:25:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:28.704 17:25:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:28.704 17:25:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:28.704 17:25:47 -- nvmf/common.sh@294 -- # net_devs=() 00:16:28.704 17:25:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:28.704 17:25:47 -- nvmf/common.sh@295 -- # e810=() 00:16:28.704 17:25:47 -- nvmf/common.sh@295 -- # local -ga e810 00:16:28.704 17:25:47 -- nvmf/common.sh@296 -- # x722=() 
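Before any connections are attempted, the sourced common.sh (traced a little earlier in this block) derives the host identity the initiator will present: nvme gen-hostnqn produces a UUID-based NQN, the UUID suffix becomes the host ID, and both are kept as ready-made CLI arguments, while the target side collects its own arguments (shared-memory ID, trace mask) the same way. A hedged sketch of that setup; variable names follow the trace, the error handling and conditionals are simplified:

  #!/usr/bin/env bash
  # Sketch: build the initiator identity and target arguments used throughout
  # these tests, mirroring the assignments visible in the trace above.
  NVME_HOSTNQN=$(nvme gen-hostnqn)               # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}            # keep just the UUID part
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

  NVMF_APP_SHM_ID=0
  NVMF_APP=(/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id + full trace group mask

  echo "host will connect as: ${NVME_HOST[*]}"
  echo "target command line:  ${NVMF_APP[*]}"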
00:16:28.704 17:25:47 -- nvmf/common.sh@296 -- # local -ga x722 00:16:28.704 17:25:47 -- nvmf/common.sh@297 -- # mlx=() 00:16:28.704 17:25:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:28.704 17:25:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:28.704 17:25:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:28.704 17:25:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:28.704 17:25:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:28.704 17:25:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:28.704 17:25:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:28.704 17:25:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:28.704 17:25:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:28.704 17:25:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:28.704 17:25:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:28.704 17:25:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:28.704 17:25:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:28.704 17:25:47 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:28.704 17:25:47 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:28.704 17:25:47 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:28.704 17:25:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:28.704 17:25:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:28.704 17:25:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:28.704 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:28.704 17:25:47 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:28.704 17:25:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:28.704 17:25:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:28.704 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:28.704 17:25:47 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:28.704 17:25:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:28.704 17:25:47 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:28.704 17:25:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.704 17:25:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:28.704 17:25:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.704 17:25:47 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:28.704 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:28.704 17:25:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.704 17:25:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:28.704 17:25:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.704 17:25:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:28.704 17:25:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.704 17:25:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:28.704 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:28.704 17:25:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.704 17:25:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:28.704 17:25:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:28.704 17:25:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:28.704 17:25:47 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:28.704 17:25:47 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:28.704 17:25:47 -- nvmf/common.sh@57 -- # uname 00:16:28.704 17:25:47 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:28.704 17:25:47 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:28.704 17:25:47 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:28.704 17:25:47 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:28.704 17:25:47 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:28.704 17:25:47 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:28.704 17:25:47 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:28.704 17:25:47 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:28.704 17:25:47 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:28.704 17:25:47 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:28.704 17:25:47 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:28.704 17:25:47 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:28.704 17:25:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:28.704 17:25:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:28.704 17:25:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:28.705 17:25:47 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:28.705 17:25:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:28.705 17:25:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:28.705 17:25:47 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:28.705 17:25:47 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:28.705 17:25:47 -- nvmf/common.sh@104 -- # continue 2 00:16:28.705 17:25:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:28.705 17:25:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:28.705 17:25:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:28.705 17:25:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:28.705 17:25:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:28.705 17:25:47 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:28.705 17:25:47 -- nvmf/common.sh@104 -- # continue 2 00:16:28.705 17:25:47 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:28.705 17:25:47 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:28.705 17:25:47 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:16:28.705 17:25:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:28.705 17:25:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:28.705 17:25:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:28.705 17:25:47 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:28.705 17:25:47 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:28.705 17:25:47 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:28.705 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:28.705 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:28.705 altname enp217s0f0np0 00:16:28.705 altname ens818f0np0 00:16:28.705 inet 192.168.100.8/24 scope global mlx_0_0 00:16:28.705 valid_lft forever preferred_lft forever 00:16:28.705 17:25:47 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:28.705 17:25:47 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:28.705 17:25:47 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:28.705 17:25:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:28.705 17:25:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:28.705 17:25:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:28.705 17:25:47 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:28.705 17:25:47 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:28.705 17:25:47 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:28.705 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:28.705 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:28.705 altname enp217s0f1np1 00:16:28.705 altname ens818f1np1 00:16:28.705 inet 192.168.100.9/24 scope global mlx_0_1 00:16:28.705 valid_lft forever preferred_lft forever 00:16:28.705 17:25:47 -- nvmf/common.sh@410 -- # return 0 00:16:28.705 17:25:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:28.705 17:25:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:28.705 17:25:47 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:28.705 17:25:47 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:28.705 17:25:47 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:28.705 17:25:47 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:28.705 17:25:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:28.705 17:25:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:28.705 17:25:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:28.705 17:25:47 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:28.705 17:25:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:28.705 17:25:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:28.705 17:25:47 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:28.705 17:25:47 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:28.705 17:25:47 -- nvmf/common.sh@104 -- # continue 2 00:16:28.705 17:25:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:28.705 17:25:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:28.705 17:25:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:28.705 17:25:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:28.705 17:25:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:28.705 17:25:47 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:28.705 17:25:47 -- nvmf/common.sh@104 -- # continue 2 00:16:28.705 17:25:47 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:28.705 17:25:47 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:28.705 17:25:47 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:28.705 17:25:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:28.705 17:25:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:28.705 17:25:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:28.705 17:25:47 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:28.705 17:25:47 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:28.705 17:25:47 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:28.705 17:25:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:28.705 17:25:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:28.705 17:25:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:28.705 17:25:47 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:28.705 192.168.100.9' 00:16:28.705 17:25:47 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:28.705 192.168.100.9' 00:16:28.705 17:25:47 -- nvmf/common.sh@445 -- # head -n 1 00:16:28.705 17:25:47 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:28.705 17:25:47 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:28.705 192.168.100.9' 00:16:28.705 17:25:47 -- nvmf/common.sh@446 -- # tail -n +2 00:16:28.705 17:25:47 -- nvmf/common.sh@446 -- # head -n 1 00:16:28.705 17:25:47 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:28.705 17:25:47 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:28.705 17:25:47 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:28.705 17:25:47 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:28.705 17:25:47 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:28.705 17:25:47 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:28.705 17:25:47 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:28.705 17:25:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:28.705 17:25:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:28.705 17:25:47 -- common/autotest_common.sh@10 -- # set +x 00:16:28.705 17:25:47 -- nvmf/common.sh@469 -- # nvmfpid=2667939 00:16:28.705 17:25:47 -- nvmf/common.sh@470 -- # waitforlisten 2667939 00:16:28.705 17:25:47 -- common/autotest_common.sh@829 -- # '[' -z 2667939 ']' 00:16:28.705 17:25:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.705 17:25:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.705 17:25:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.705 17:25:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.705 17:25:47 -- common/autotest_common.sh@10 -- # set +x 00:16:28.705 17:25:47 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:28.705 [2024-11-09 17:25:47.342207] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
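At this point the harness has started nvmf_tgt with core mask 0x2, and waitforlisten blocks until the application answers on its UNIX-domain RPC socket (/var/tmp/spdk.sock). A rough, hypothetical equivalent of that wait is sketched below; the real helper in common/autotest_common.sh does considerably more, this version only polls for the socket file.

# Sketch: poll until the SPDK RPC socket appears, or give up after roughly 10 s.
wait_for_rpc_socket() {
    local sock=${1:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0    # socket file exists; assume the target is up
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
wait_for_rpc_socket /var/tmp/spdk.sock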
00:16:28.705 [2024-11-09 17:25:47.342256] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.705 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.705 [2024-11-09 17:25:47.411356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.705 [2024-11-09 17:25:47.483912] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:28.705 [2024-11-09 17:25:47.484012] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.705 [2024-11-09 17:25:47.484021] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.705 [2024-11-09 17:25:47.484030] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.705 [2024-11-09 17:25:47.484055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.705 17:25:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.705 17:25:48 -- common/autotest_common.sh@862 -- # return 0 00:16:28.705 17:25:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:28.705 17:25:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:28.705 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:16:28.705 17:25:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.705 17:25:48 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:28.705 17:25:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.705 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:16:28.705 [2024-11-09 17:25:48.235311] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xff0230/0xff4720) succeed. 00:16:28.705 [2024-11-09 17:25:48.244269] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xff1730/0x1035dc0) succeed. 
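The EAL notice just above ("No free 2048 kB hugepages reported on node 1") is informational here, since the target goes on to start its reactor and create the IB devices, but it is the first thing to check when EAL initialization fails outright. A small sketch for inspecting per-node and global hugepage counters, assuming the standard Linux sysfs/procfs layout:

# Sketch: show free 2 MiB hugepages per NUMA node, then the global summary.
for node in /sys/devices/system/node/node*; do
    printf '%s free 2M hugepages: ' "${node##*/}"
    cat "$node/hugepages/hugepages-2048kB/free_hugepages"
done
grep -i huge /proc/meminfo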
00:16:28.705 17:25:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.705 17:25:48 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:28.705 17:25:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.705 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:16:28.706 17:25:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.706 17:25:48 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:28.706 17:25:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.706 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:16:28.706 [2024-11-09 17:25:48.296843] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:28.706 17:25:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.706 17:25:48 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:28.706 17:25:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.706 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:16:28.706 NULL1 00:16:28.706 17:25:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.706 17:25:48 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:28.706 17:25:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.706 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:16:28.706 17:25:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.706 17:25:48 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:28.706 17:25:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.706 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:16:28.706 17:25:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.706 17:25:48 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:28.706 [2024-11-09 17:25:48.352722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
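Before the fused_ordering client is launched, the rpc_cmd sequence above has configured the target end to end: an RDMA transport, subsystem cnode1, an RDMA listener on 192.168.100.8:4420, a null bdev, and that bdev attached as a namespace. The same configuration expressed as standalone rpc.py invocations is sketched below, assuming the stock SPDK rpc.py client under the workspace's spdk/scripts directory and the default /var/tmp/spdk.sock socket; the arguments are copied from the log.

# Sketch: replay the target configuration this test performed via rpc_cmd.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_null_create NULL1 1000 512      # ~1 GB null-backed bdev, 512-byte blocks
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1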
00:16:28.706 [2024-11-09 17:25:48.352757] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668223 ] 00:16:28.706 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.966 Attached to nqn.2016-06.io.spdk:cnode1 00:16:28.966 Namespace ID: 1 size: 1GB 00:16:28.966 fused_ordering(0) 00:16:28.966 fused_ordering(1) 00:16:28.966 fused_ordering(2) 00:16:28.966 fused_ordering(3) 00:16:28.966 fused_ordering(4) 00:16:28.966 fused_ordering(5) 00:16:28.966 fused_ordering(6) 00:16:28.966 fused_ordering(7) 00:16:28.966 fused_ordering(8) 00:16:28.966 fused_ordering(9) 00:16:28.966 fused_ordering(10) 00:16:28.966 fused_ordering(11) 00:16:28.966 fused_ordering(12) 00:16:28.966 fused_ordering(13) 00:16:28.966 fused_ordering(14) 00:16:28.966 fused_ordering(15) 00:16:28.966 fused_ordering(16) 00:16:28.966 fused_ordering(17) 00:16:28.966 fused_ordering(18) 00:16:28.966 fused_ordering(19) 00:16:28.966 fused_ordering(20) 00:16:28.966 fused_ordering(21) 00:16:28.966 fused_ordering(22) 00:16:28.966 fused_ordering(23) 00:16:28.966 fused_ordering(24) 00:16:28.966 fused_ordering(25) 00:16:28.966 fused_ordering(26) 00:16:28.966 fused_ordering(27) 00:16:28.966 fused_ordering(28) 00:16:28.966 fused_ordering(29) 00:16:28.966 fused_ordering(30) 00:16:28.966 fused_ordering(31) 00:16:28.966 fused_ordering(32) 00:16:28.966 fused_ordering(33) 00:16:28.966 fused_ordering(34) 00:16:28.966 fused_ordering(35) 00:16:28.966 fused_ordering(36) 00:16:28.966 fused_ordering(37) 00:16:28.966 fused_ordering(38) 00:16:28.966 fused_ordering(39) 00:16:28.966 fused_ordering(40) 00:16:28.966 fused_ordering(41) 00:16:28.966 fused_ordering(42) 00:16:28.966 fused_ordering(43) 00:16:28.966 fused_ordering(44) 00:16:28.966 fused_ordering(45) 00:16:28.966 fused_ordering(46) 00:16:28.966 fused_ordering(47) 00:16:28.966 fused_ordering(48) 00:16:28.966 fused_ordering(49) 00:16:28.966 fused_ordering(50) 00:16:28.966 fused_ordering(51) 00:16:28.966 fused_ordering(52) 00:16:28.966 fused_ordering(53) 00:16:28.966 fused_ordering(54) 00:16:28.966 fused_ordering(55) 00:16:28.966 fused_ordering(56) 00:16:28.966 fused_ordering(57) 00:16:28.966 fused_ordering(58) 00:16:28.966 fused_ordering(59) 00:16:28.966 fused_ordering(60) 00:16:28.966 fused_ordering(61) 00:16:28.966 fused_ordering(62) 00:16:28.966 fused_ordering(63) 00:16:28.966 fused_ordering(64) 00:16:28.966 fused_ordering(65) 00:16:28.966 fused_ordering(66) 00:16:28.966 fused_ordering(67) 00:16:28.966 fused_ordering(68) 00:16:28.966 fused_ordering(69) 00:16:28.966 fused_ordering(70) 00:16:28.966 fused_ordering(71) 00:16:28.966 fused_ordering(72) 00:16:28.966 fused_ordering(73) 00:16:28.966 fused_ordering(74) 00:16:28.966 fused_ordering(75) 00:16:28.966 fused_ordering(76) 00:16:28.966 fused_ordering(77) 00:16:28.966 fused_ordering(78) 00:16:28.966 fused_ordering(79) 00:16:28.966 fused_ordering(80) 00:16:28.966 fused_ordering(81) 00:16:28.966 fused_ordering(82) 00:16:28.966 fused_ordering(83) 00:16:28.966 fused_ordering(84) 00:16:28.966 fused_ordering(85) 00:16:28.966 fused_ordering(86) 00:16:28.966 fused_ordering(87) 00:16:28.966 fused_ordering(88) 00:16:28.966 fused_ordering(89) 00:16:28.966 fused_ordering(90) 00:16:28.966 fused_ordering(91) 00:16:28.966 fused_ordering(92) 00:16:28.966 fused_ordering(93) 00:16:28.966 fused_ordering(94) 00:16:28.966 fused_ordering(95) 00:16:28.966 fused_ordering(96) 00:16:28.966 
fused_ordering(97) 00:16:28.966 fused_ordering(98) 00:16:28.966 fused_ordering(99) 00:16:28.966 fused_ordering(100) 00:16:28.966 fused_ordering(101) 00:16:28.966 fused_ordering(102) 00:16:28.966 fused_ordering(103) 00:16:28.966 fused_ordering(104) 00:16:28.966 fused_ordering(105) 00:16:28.966 fused_ordering(106) 00:16:28.966 fused_ordering(107) 00:16:28.966 fused_ordering(108) 00:16:28.966 fused_ordering(109) 00:16:28.966 fused_ordering(110) 00:16:28.966 fused_ordering(111) 00:16:28.966 fused_ordering(112) 00:16:28.966 fused_ordering(113) 00:16:28.966 fused_ordering(114) 00:16:28.966 fused_ordering(115) 00:16:28.966 fused_ordering(116) 00:16:28.966 fused_ordering(117) 00:16:28.966 fused_ordering(118) 00:16:28.966 fused_ordering(119) 00:16:28.966 fused_ordering(120) 00:16:28.966 fused_ordering(121) 00:16:28.966 fused_ordering(122) 00:16:28.966 fused_ordering(123) 00:16:28.966 fused_ordering(124) 00:16:28.966 fused_ordering(125) 00:16:28.966 fused_ordering(126) 00:16:28.966 fused_ordering(127) 00:16:28.966 fused_ordering(128) 00:16:28.966 fused_ordering(129) 00:16:28.966 fused_ordering(130) 00:16:28.966 fused_ordering(131) 00:16:28.966 fused_ordering(132) 00:16:28.966 fused_ordering(133) 00:16:28.966 fused_ordering(134) 00:16:28.966 fused_ordering(135) 00:16:28.966 fused_ordering(136) 00:16:28.966 fused_ordering(137) 00:16:28.966 fused_ordering(138) 00:16:28.966 fused_ordering(139) 00:16:28.966 fused_ordering(140) 00:16:28.966 fused_ordering(141) 00:16:28.966 fused_ordering(142) 00:16:28.966 fused_ordering(143) 00:16:28.966 fused_ordering(144) 00:16:28.966 fused_ordering(145) 00:16:28.966 fused_ordering(146) 00:16:28.966 fused_ordering(147) 00:16:28.966 fused_ordering(148) 00:16:28.966 fused_ordering(149) 00:16:28.966 fused_ordering(150) 00:16:28.966 fused_ordering(151) 00:16:28.966 fused_ordering(152) 00:16:28.966 fused_ordering(153) 00:16:28.966 fused_ordering(154) 00:16:28.966 fused_ordering(155) 00:16:28.966 fused_ordering(156) 00:16:28.966 fused_ordering(157) 00:16:28.966 fused_ordering(158) 00:16:28.966 fused_ordering(159) 00:16:28.966 fused_ordering(160) 00:16:28.966 fused_ordering(161) 00:16:28.966 fused_ordering(162) 00:16:28.966 fused_ordering(163) 00:16:28.966 fused_ordering(164) 00:16:28.966 fused_ordering(165) 00:16:28.966 fused_ordering(166) 00:16:28.966 fused_ordering(167) 00:16:28.966 fused_ordering(168) 00:16:28.966 fused_ordering(169) 00:16:28.966 fused_ordering(170) 00:16:28.966 fused_ordering(171) 00:16:28.966 fused_ordering(172) 00:16:28.966 fused_ordering(173) 00:16:28.966 fused_ordering(174) 00:16:28.966 fused_ordering(175) 00:16:28.966 fused_ordering(176) 00:16:28.966 fused_ordering(177) 00:16:28.966 fused_ordering(178) 00:16:28.966 fused_ordering(179) 00:16:28.966 fused_ordering(180) 00:16:28.966 fused_ordering(181) 00:16:28.966 fused_ordering(182) 00:16:28.966 fused_ordering(183) 00:16:28.966 fused_ordering(184) 00:16:28.966 fused_ordering(185) 00:16:28.966 fused_ordering(186) 00:16:28.966 fused_ordering(187) 00:16:28.966 fused_ordering(188) 00:16:28.966 fused_ordering(189) 00:16:28.966 fused_ordering(190) 00:16:28.966 fused_ordering(191) 00:16:28.966 fused_ordering(192) 00:16:28.966 fused_ordering(193) 00:16:28.966 fused_ordering(194) 00:16:28.966 fused_ordering(195) 00:16:28.966 fused_ordering(196) 00:16:28.966 fused_ordering(197) 00:16:28.966 fused_ordering(198) 00:16:28.966 fused_ordering(199) 00:16:28.966 fused_ordering(200) 00:16:28.966 fused_ordering(201) 00:16:28.966 fused_ordering(202) 00:16:28.966 fused_ordering(203) 00:16:28.966 fused_ordering(204) 
00:16:28.966 fused_ordering(205) 00:16:28.966 fused_ordering(206) 00:16:28.966 fused_ordering(207) 00:16:28.966 fused_ordering(208) 00:16:28.966 fused_ordering(209) 00:16:28.966 fused_ordering(210) 00:16:28.966 fused_ordering(211) 00:16:28.966 fused_ordering(212) 00:16:28.966 fused_ordering(213) 00:16:28.966 fused_ordering(214) 00:16:28.966 fused_ordering(215) 00:16:28.966 fused_ordering(216) 00:16:28.966 fused_ordering(217) 00:16:28.966 fused_ordering(218) 00:16:28.966 fused_ordering(219) 00:16:28.966 fused_ordering(220) 00:16:28.966 fused_ordering(221) 00:16:28.966 fused_ordering(222) 00:16:28.966 fused_ordering(223) 00:16:28.966 fused_ordering(224) 00:16:28.966 fused_ordering(225) 00:16:28.967 fused_ordering(226) 00:16:28.967 fused_ordering(227) 00:16:28.967 fused_ordering(228) 00:16:28.967 fused_ordering(229) 00:16:28.967 fused_ordering(230) 00:16:28.967 fused_ordering(231) 00:16:28.967 fused_ordering(232) 00:16:28.967 fused_ordering(233) 00:16:28.967 fused_ordering(234) 00:16:28.967 fused_ordering(235) 00:16:28.967 fused_ordering(236) 00:16:28.967 fused_ordering(237) 00:16:28.967 fused_ordering(238) 00:16:28.967 fused_ordering(239) 00:16:28.967 fused_ordering(240) 00:16:28.967 fused_ordering(241) 00:16:28.967 fused_ordering(242) 00:16:28.967 fused_ordering(243) 00:16:28.967 fused_ordering(244) 00:16:28.967 fused_ordering(245) 00:16:28.967 fused_ordering(246) 00:16:28.967 fused_ordering(247) 00:16:28.967 fused_ordering(248) 00:16:28.967 fused_ordering(249) 00:16:28.967 fused_ordering(250) 00:16:28.967 fused_ordering(251) 00:16:28.967 fused_ordering(252) 00:16:28.967 fused_ordering(253) 00:16:28.967 fused_ordering(254) 00:16:28.967 fused_ordering(255) 00:16:28.967 fused_ordering(256) 00:16:28.967 fused_ordering(257) 00:16:28.967 fused_ordering(258) 00:16:28.967 fused_ordering(259) 00:16:28.967 fused_ordering(260) 00:16:28.967 fused_ordering(261) 00:16:28.967 fused_ordering(262) 00:16:28.967 fused_ordering(263) 00:16:28.967 fused_ordering(264) 00:16:28.967 fused_ordering(265) 00:16:28.967 fused_ordering(266) 00:16:28.967 fused_ordering(267) 00:16:28.967 fused_ordering(268) 00:16:28.967 fused_ordering(269) 00:16:28.967 fused_ordering(270) 00:16:28.967 fused_ordering(271) 00:16:28.967 fused_ordering(272) 00:16:28.967 fused_ordering(273) 00:16:28.967 fused_ordering(274) 00:16:28.967 fused_ordering(275) 00:16:28.967 fused_ordering(276) 00:16:28.967 fused_ordering(277) 00:16:28.967 fused_ordering(278) 00:16:28.967 fused_ordering(279) 00:16:28.967 fused_ordering(280) 00:16:28.967 fused_ordering(281) 00:16:28.967 fused_ordering(282) 00:16:28.967 fused_ordering(283) 00:16:28.967 fused_ordering(284) 00:16:28.967 fused_ordering(285) 00:16:28.967 fused_ordering(286) 00:16:28.967 fused_ordering(287) 00:16:28.967 fused_ordering(288) 00:16:28.967 fused_ordering(289) 00:16:28.967 fused_ordering(290) 00:16:28.967 fused_ordering(291) 00:16:28.967 fused_ordering(292) 00:16:28.967 fused_ordering(293) 00:16:28.967 fused_ordering(294) 00:16:28.967 fused_ordering(295) 00:16:28.967 fused_ordering(296) 00:16:28.967 fused_ordering(297) 00:16:28.967 fused_ordering(298) 00:16:28.967 fused_ordering(299) 00:16:28.967 fused_ordering(300) 00:16:28.967 fused_ordering(301) 00:16:28.967 fused_ordering(302) 00:16:28.967 fused_ordering(303) 00:16:28.967 fused_ordering(304) 00:16:28.967 fused_ordering(305) 00:16:28.967 fused_ordering(306) 00:16:28.967 fused_ordering(307) 00:16:28.967 fused_ordering(308) 00:16:28.967 fused_ordering(309) 00:16:28.967 fused_ordering(310) 00:16:28.967 fused_ordering(311) 00:16:28.967 
fused_ordering(312) 00:16:28.967 fused_ordering(313) 00:16:28.967 fused_ordering(314) 00:16:28.967 fused_ordering(315) 00:16:28.967 fused_ordering(316) 00:16:28.967 fused_ordering(317) 00:16:28.967 fused_ordering(318) 00:16:28.967 fused_ordering(319) 00:16:28.967 fused_ordering(320) 00:16:28.967 fused_ordering(321) 00:16:28.967 fused_ordering(322) 00:16:28.967 fused_ordering(323) 00:16:28.967 fused_ordering(324) 00:16:28.967 fused_ordering(325) 00:16:28.967 fused_ordering(326) 00:16:28.967 fused_ordering(327) 00:16:28.967 fused_ordering(328) 00:16:28.967 fused_ordering(329) 00:16:28.967 fused_ordering(330) 00:16:28.967 fused_ordering(331) 00:16:28.967 fused_ordering(332) 00:16:28.967 fused_ordering(333) 00:16:28.967 fused_ordering(334) 00:16:28.967 fused_ordering(335) 00:16:28.967 fused_ordering(336) 00:16:28.967 fused_ordering(337) 00:16:28.967 fused_ordering(338) 00:16:28.967 fused_ordering(339) 00:16:28.967 fused_ordering(340) 00:16:28.967 fused_ordering(341) 00:16:28.967 fused_ordering(342) 00:16:28.967 fused_ordering(343) 00:16:28.967 fused_ordering(344) 00:16:28.967 fused_ordering(345) 00:16:28.967 fused_ordering(346) 00:16:28.967 fused_ordering(347) 00:16:28.967 fused_ordering(348) 00:16:28.967 fused_ordering(349) 00:16:28.967 fused_ordering(350) 00:16:28.967 fused_ordering(351) 00:16:28.967 fused_ordering(352) 00:16:28.967 fused_ordering(353) 00:16:28.967 fused_ordering(354) 00:16:28.967 fused_ordering(355) 00:16:28.967 fused_ordering(356) 00:16:28.967 fused_ordering(357) 00:16:28.967 fused_ordering(358) 00:16:28.967 fused_ordering(359) 00:16:28.967 fused_ordering(360) 00:16:28.967 fused_ordering(361) 00:16:28.967 fused_ordering(362) 00:16:28.967 fused_ordering(363) 00:16:28.967 fused_ordering(364) 00:16:28.967 fused_ordering(365) 00:16:28.967 fused_ordering(366) 00:16:28.967 fused_ordering(367) 00:16:28.967 fused_ordering(368) 00:16:28.967 fused_ordering(369) 00:16:28.967 fused_ordering(370) 00:16:28.967 fused_ordering(371) 00:16:28.967 fused_ordering(372) 00:16:28.967 fused_ordering(373) 00:16:28.967 fused_ordering(374) 00:16:28.967 fused_ordering(375) 00:16:28.967 fused_ordering(376) 00:16:28.967 fused_ordering(377) 00:16:28.967 fused_ordering(378) 00:16:28.967 fused_ordering(379) 00:16:28.967 fused_ordering(380) 00:16:28.967 fused_ordering(381) 00:16:28.967 fused_ordering(382) 00:16:28.967 fused_ordering(383) 00:16:28.967 fused_ordering(384) 00:16:28.967 fused_ordering(385) 00:16:28.967 fused_ordering(386) 00:16:28.967 fused_ordering(387) 00:16:28.967 fused_ordering(388) 00:16:28.967 fused_ordering(389) 00:16:28.967 fused_ordering(390) 00:16:28.967 fused_ordering(391) 00:16:28.967 fused_ordering(392) 00:16:28.967 fused_ordering(393) 00:16:28.967 fused_ordering(394) 00:16:28.967 fused_ordering(395) 00:16:28.967 fused_ordering(396) 00:16:28.967 fused_ordering(397) 00:16:28.967 fused_ordering(398) 00:16:28.967 fused_ordering(399) 00:16:28.967 fused_ordering(400) 00:16:28.967 fused_ordering(401) 00:16:28.967 fused_ordering(402) 00:16:28.967 fused_ordering(403) 00:16:28.967 fused_ordering(404) 00:16:28.967 fused_ordering(405) 00:16:28.967 fused_ordering(406) 00:16:28.967 fused_ordering(407) 00:16:28.967 fused_ordering(408) 00:16:28.967 fused_ordering(409) 00:16:28.967 fused_ordering(410) 00:16:28.967 fused_ordering(411) 00:16:28.967 fused_ordering(412) 00:16:28.967 fused_ordering(413) 00:16:28.967 fused_ordering(414) 00:16:28.967 fused_ordering(415) 00:16:28.967 fused_ordering(416) 00:16:28.967 fused_ordering(417) 00:16:28.967 fused_ordering(418) 00:16:28.967 fused_ordering(419) 
00:16:28.967 fused_ordering(420) 00:16:28.967 fused_ordering(421) 00:16:28.967 fused_ordering(422) 00:16:28.967 fused_ordering(423) 00:16:28.967 fused_ordering(424) 00:16:28.967 fused_ordering(425) 00:16:28.967 fused_ordering(426) 00:16:28.967 fused_ordering(427) 00:16:28.967 fused_ordering(428) 00:16:28.967 fused_ordering(429) 00:16:28.967 fused_ordering(430) 00:16:28.967 fused_ordering(431) 00:16:28.967 fused_ordering(432) 00:16:28.967 fused_ordering(433) 00:16:28.967 fused_ordering(434) 00:16:28.967 fused_ordering(435) 00:16:28.967 fused_ordering(436) 00:16:28.967 fused_ordering(437) 00:16:28.967 fused_ordering(438) 00:16:28.967 fused_ordering(439) 00:16:28.967 fused_ordering(440) 00:16:28.967 fused_ordering(441) 00:16:28.967 fused_ordering(442) 00:16:28.967 fused_ordering(443) 00:16:28.967 fused_ordering(444) 00:16:28.967 fused_ordering(445) 00:16:28.967 fused_ordering(446) 00:16:28.967 fused_ordering(447) 00:16:28.967 fused_ordering(448) 00:16:28.967 fused_ordering(449) 00:16:28.967 fused_ordering(450) 00:16:28.967 fused_ordering(451) 00:16:28.967 fused_ordering(452) 00:16:28.967 fused_ordering(453) 00:16:28.967 fused_ordering(454) 00:16:28.967 fused_ordering(455) 00:16:28.967 fused_ordering(456) 00:16:28.967 fused_ordering(457) 00:16:28.967 fused_ordering(458) 00:16:28.967 fused_ordering(459) 00:16:28.967 fused_ordering(460) 00:16:28.967 fused_ordering(461) 00:16:28.967 fused_ordering(462) 00:16:28.967 fused_ordering(463) 00:16:28.967 fused_ordering(464) 00:16:28.967 fused_ordering(465) 00:16:28.967 fused_ordering(466) 00:16:28.967 fused_ordering(467) 00:16:28.967 fused_ordering(468) 00:16:28.967 fused_ordering(469) 00:16:28.967 fused_ordering(470) 00:16:28.967 fused_ordering(471) 00:16:28.967 fused_ordering(472) 00:16:28.967 fused_ordering(473) 00:16:28.967 fused_ordering(474) 00:16:28.967 fused_ordering(475) 00:16:28.968 fused_ordering(476) 00:16:28.968 fused_ordering(477) 00:16:28.968 fused_ordering(478) 00:16:28.968 fused_ordering(479) 00:16:28.968 fused_ordering(480) 00:16:28.968 fused_ordering(481) 00:16:28.968 fused_ordering(482) 00:16:28.968 fused_ordering(483) 00:16:28.968 fused_ordering(484) 00:16:28.968 fused_ordering(485) 00:16:28.968 fused_ordering(486) 00:16:28.968 fused_ordering(487) 00:16:28.968 fused_ordering(488) 00:16:28.968 fused_ordering(489) 00:16:28.968 fused_ordering(490) 00:16:28.968 fused_ordering(491) 00:16:28.968 fused_ordering(492) 00:16:28.968 fused_ordering(493) 00:16:28.968 fused_ordering(494) 00:16:28.968 fused_ordering(495) 00:16:28.968 fused_ordering(496) 00:16:28.968 fused_ordering(497) 00:16:28.968 fused_ordering(498) 00:16:28.968 fused_ordering(499) 00:16:28.968 fused_ordering(500) 00:16:28.968 fused_ordering(501) 00:16:28.968 fused_ordering(502) 00:16:28.968 fused_ordering(503) 00:16:28.968 fused_ordering(504) 00:16:28.968 fused_ordering(505) 00:16:28.968 fused_ordering(506) 00:16:28.968 fused_ordering(507) 00:16:28.968 fused_ordering(508) 00:16:28.968 fused_ordering(509) 00:16:28.968 fused_ordering(510) 00:16:28.968 fused_ordering(511) 00:16:28.968 fused_ordering(512) 00:16:28.968 fused_ordering(513) 00:16:28.968 fused_ordering(514) 00:16:28.968 fused_ordering(515) 00:16:28.968 fused_ordering(516) 00:16:28.968 fused_ordering(517) 00:16:28.968 fused_ordering(518) 00:16:28.968 fused_ordering(519) 00:16:28.968 fused_ordering(520) 00:16:28.968 fused_ordering(521) 00:16:28.968 fused_ordering(522) 00:16:28.968 fused_ordering(523) 00:16:28.968 fused_ordering(524) 00:16:28.968 fused_ordering(525) 00:16:28.968 fused_ordering(526) 00:16:28.968 
fused_ordering(527) 00:16:28.968 fused_ordering(528) 00:16:28.968 fused_ordering(529) 00:16:28.968 fused_ordering(530) 00:16:28.968 fused_ordering(531) 00:16:28.968 fused_ordering(532) 00:16:28.968 fused_ordering(533) 00:16:28.968 fused_ordering(534) 00:16:28.968 fused_ordering(535) 00:16:28.968 fused_ordering(536) 00:16:28.968 fused_ordering(537) 00:16:28.968 fused_ordering(538) 00:16:28.968 fused_ordering(539) 00:16:28.968 fused_ordering(540) 00:16:28.968 fused_ordering(541) 00:16:28.968 fused_ordering(542) 00:16:28.968 fused_ordering(543) 00:16:28.968 fused_ordering(544) 00:16:28.968 fused_ordering(545) 00:16:28.968 fused_ordering(546) 00:16:28.968 fused_ordering(547) 00:16:28.968 fused_ordering(548) 00:16:28.968 fused_ordering(549) 00:16:28.968 fused_ordering(550) 00:16:28.968 fused_ordering(551) 00:16:28.968 fused_ordering(552) 00:16:28.968 fused_ordering(553) 00:16:28.968 fused_ordering(554) 00:16:28.968 fused_ordering(555) 00:16:28.968 fused_ordering(556) 00:16:28.968 fused_ordering(557) 00:16:28.968 fused_ordering(558) 00:16:28.968 fused_ordering(559) 00:16:28.968 fused_ordering(560) 00:16:28.968 fused_ordering(561) 00:16:28.968 fused_ordering(562) 00:16:28.968 fused_ordering(563) 00:16:28.968 fused_ordering(564) 00:16:28.968 fused_ordering(565) 00:16:28.968 fused_ordering(566) 00:16:28.968 fused_ordering(567) 00:16:28.968 fused_ordering(568) 00:16:28.968 fused_ordering(569) 00:16:28.968 fused_ordering(570) 00:16:28.968 fused_ordering(571) 00:16:28.968 fused_ordering(572) 00:16:28.968 fused_ordering(573) 00:16:28.968 fused_ordering(574) 00:16:28.968 fused_ordering(575) 00:16:28.968 fused_ordering(576) 00:16:28.968 fused_ordering(577) 00:16:28.968 fused_ordering(578) 00:16:28.968 fused_ordering(579) 00:16:28.968 fused_ordering(580) 00:16:28.968 fused_ordering(581) 00:16:28.968 fused_ordering(582) 00:16:28.968 fused_ordering(583) 00:16:28.968 fused_ordering(584) 00:16:28.968 fused_ordering(585) 00:16:28.968 fused_ordering(586) 00:16:28.968 fused_ordering(587) 00:16:28.968 fused_ordering(588) 00:16:28.968 fused_ordering(589) 00:16:28.968 fused_ordering(590) 00:16:28.968 fused_ordering(591) 00:16:28.968 fused_ordering(592) 00:16:28.968 fused_ordering(593) 00:16:28.968 fused_ordering(594) 00:16:28.968 fused_ordering(595) 00:16:28.968 fused_ordering(596) 00:16:28.968 fused_ordering(597) 00:16:28.968 fused_ordering(598) 00:16:28.968 fused_ordering(599) 00:16:28.968 fused_ordering(600) 00:16:28.968 fused_ordering(601) 00:16:28.968 fused_ordering(602) 00:16:28.968 fused_ordering(603) 00:16:28.968 fused_ordering(604) 00:16:28.968 fused_ordering(605) 00:16:28.968 fused_ordering(606) 00:16:28.968 fused_ordering(607) 00:16:28.968 fused_ordering(608) 00:16:28.968 fused_ordering(609) 00:16:28.968 fused_ordering(610) 00:16:28.968 fused_ordering(611) 00:16:28.968 fused_ordering(612) 00:16:28.968 fused_ordering(613) 00:16:28.968 fused_ordering(614) 00:16:28.968 fused_ordering(615) 00:16:29.227 fused_ordering(616) 00:16:29.227 fused_ordering(617) 00:16:29.227 fused_ordering(618) 00:16:29.227 fused_ordering(619) 00:16:29.227 fused_ordering(620) 00:16:29.227 fused_ordering(621) 00:16:29.227 fused_ordering(622) 00:16:29.227 fused_ordering(623) 00:16:29.228 fused_ordering(624) 00:16:29.228 fused_ordering(625) 00:16:29.228 fused_ordering(626) 00:16:29.228 fused_ordering(627) 00:16:29.228 fused_ordering(628) 00:16:29.228 fused_ordering(629) 00:16:29.228 fused_ordering(630) 00:16:29.228 fused_ordering(631) 00:16:29.228 fused_ordering(632) 00:16:29.228 fused_ordering(633) 00:16:29.228 fused_ordering(634) 
00:16:29.228 fused_ordering(635) 00:16:29.228 fused_ordering(636) 00:16:29.228 fused_ordering(637) 00:16:29.228 fused_ordering(638) 00:16:29.228 fused_ordering(639) 00:16:29.228 fused_ordering(640) 00:16:29.228 fused_ordering(641) 00:16:29.228 fused_ordering(642) 00:16:29.228 fused_ordering(643) 00:16:29.228 fused_ordering(644) 00:16:29.228 fused_ordering(645) 00:16:29.228 fused_ordering(646) 00:16:29.228 fused_ordering(647) 00:16:29.228 fused_ordering(648) 00:16:29.228 fused_ordering(649) 00:16:29.228 fused_ordering(650) 00:16:29.228 fused_ordering(651) 00:16:29.228 fused_ordering(652) 00:16:29.228 fused_ordering(653) 00:16:29.228 fused_ordering(654) 00:16:29.228 fused_ordering(655) 00:16:29.228 fused_ordering(656) 00:16:29.228 fused_ordering(657) 00:16:29.228 fused_ordering(658) 00:16:29.228 fused_ordering(659) 00:16:29.228 fused_ordering(660) 00:16:29.228 fused_ordering(661) 00:16:29.228 fused_ordering(662) 00:16:29.228 fused_ordering(663) 00:16:29.228 fused_ordering(664) 00:16:29.228 fused_ordering(665) 00:16:29.228 fused_ordering(666) 00:16:29.228 fused_ordering(667) 00:16:29.228 fused_ordering(668) 00:16:29.228 fused_ordering(669) 00:16:29.228 fused_ordering(670) 00:16:29.228 fused_ordering(671) 00:16:29.228 fused_ordering(672) 00:16:29.228 fused_ordering(673) 00:16:29.228 fused_ordering(674) 00:16:29.228 fused_ordering(675) 00:16:29.228 fused_ordering(676) 00:16:29.228 fused_ordering(677) 00:16:29.228 fused_ordering(678) 00:16:29.228 fused_ordering(679) 00:16:29.228 fused_ordering(680) 00:16:29.228 fused_ordering(681) 00:16:29.228 fused_ordering(682) 00:16:29.228 fused_ordering(683) 00:16:29.228 fused_ordering(684) 00:16:29.228 fused_ordering(685) 00:16:29.228 fused_ordering(686) 00:16:29.228 fused_ordering(687) 00:16:29.228 fused_ordering(688) 00:16:29.228 fused_ordering(689) 00:16:29.228 fused_ordering(690) 00:16:29.228 fused_ordering(691) 00:16:29.228 fused_ordering(692) 00:16:29.228 fused_ordering(693) 00:16:29.228 fused_ordering(694) 00:16:29.228 fused_ordering(695) 00:16:29.228 fused_ordering(696) 00:16:29.228 fused_ordering(697) 00:16:29.228 fused_ordering(698) 00:16:29.228 fused_ordering(699) 00:16:29.228 fused_ordering(700) 00:16:29.228 fused_ordering(701) 00:16:29.228 fused_ordering(702) 00:16:29.228 fused_ordering(703) 00:16:29.228 fused_ordering(704) 00:16:29.228 fused_ordering(705) 00:16:29.228 fused_ordering(706) 00:16:29.228 fused_ordering(707) 00:16:29.228 fused_ordering(708) 00:16:29.228 fused_ordering(709) 00:16:29.228 fused_ordering(710) 00:16:29.228 fused_ordering(711) 00:16:29.228 fused_ordering(712) 00:16:29.228 fused_ordering(713) 00:16:29.228 fused_ordering(714) 00:16:29.228 fused_ordering(715) 00:16:29.228 fused_ordering(716) 00:16:29.228 fused_ordering(717) 00:16:29.228 fused_ordering(718) 00:16:29.228 fused_ordering(719) 00:16:29.228 fused_ordering(720) 00:16:29.228 fused_ordering(721) 00:16:29.228 fused_ordering(722) 00:16:29.228 fused_ordering(723) 00:16:29.228 fused_ordering(724) 00:16:29.228 fused_ordering(725) 00:16:29.228 fused_ordering(726) 00:16:29.228 fused_ordering(727) 00:16:29.228 fused_ordering(728) 00:16:29.228 fused_ordering(729) 00:16:29.228 fused_ordering(730) 00:16:29.228 fused_ordering(731) 00:16:29.228 fused_ordering(732) 00:16:29.228 fused_ordering(733) 00:16:29.228 fused_ordering(734) 00:16:29.228 fused_ordering(735) 00:16:29.228 fused_ordering(736) 00:16:29.228 fused_ordering(737) 00:16:29.228 fused_ordering(738) 00:16:29.228 fused_ordering(739) 00:16:29.228 fused_ordering(740) 00:16:29.228 fused_ordering(741) 00:16:29.228 
fused_ordering(742) 00:16:29.228 fused_ordering(743) 00:16:29.228 fused_ordering(744) 00:16:29.228 fused_ordering(745) 00:16:29.228 fused_ordering(746) 00:16:29.228 fused_ordering(747) 00:16:29.228 fused_ordering(748) 00:16:29.228 fused_ordering(749) 00:16:29.228 fused_ordering(750) 00:16:29.228 fused_ordering(751) 00:16:29.228 fused_ordering(752) 00:16:29.228 fused_ordering(753) 00:16:29.228 fused_ordering(754) 00:16:29.228 fused_ordering(755) 00:16:29.228 fused_ordering(756) 00:16:29.228 fused_ordering(757) 00:16:29.228 fused_ordering(758) 00:16:29.228 fused_ordering(759) 00:16:29.228 fused_ordering(760) 00:16:29.228 fused_ordering(761) 00:16:29.228 fused_ordering(762) 00:16:29.228 fused_ordering(763) 00:16:29.228 fused_ordering(764) 00:16:29.228 fused_ordering(765) 00:16:29.228 fused_ordering(766) 00:16:29.228 fused_ordering(767) 00:16:29.228 fused_ordering(768) 00:16:29.228 fused_ordering(769) 00:16:29.228 fused_ordering(770) 00:16:29.228 fused_ordering(771) 00:16:29.228 fused_ordering(772) 00:16:29.228 fused_ordering(773) 00:16:29.228 fused_ordering(774) 00:16:29.228 fused_ordering(775) 00:16:29.228 fused_ordering(776) 00:16:29.228 fused_ordering(777) 00:16:29.228 fused_ordering(778) 00:16:29.228 fused_ordering(779) 00:16:29.228 fused_ordering(780) 00:16:29.228 fused_ordering(781) 00:16:29.228 fused_ordering(782) 00:16:29.228 fused_ordering(783) 00:16:29.228 fused_ordering(784) 00:16:29.228 fused_ordering(785) 00:16:29.228 fused_ordering(786) 00:16:29.228 fused_ordering(787) 00:16:29.228 fused_ordering(788) 00:16:29.228 fused_ordering(789) 00:16:29.228 fused_ordering(790) 00:16:29.228 fused_ordering(791) 00:16:29.228 fused_ordering(792) 00:16:29.228 fused_ordering(793) 00:16:29.228 fused_ordering(794) 00:16:29.228 fused_ordering(795) 00:16:29.228 fused_ordering(796) 00:16:29.228 fused_ordering(797) 00:16:29.228 fused_ordering(798) 00:16:29.228 fused_ordering(799) 00:16:29.228 fused_ordering(800) 00:16:29.228 fused_ordering(801) 00:16:29.228 fused_ordering(802) 00:16:29.228 fused_ordering(803) 00:16:29.228 fused_ordering(804) 00:16:29.228 fused_ordering(805) 00:16:29.228 fused_ordering(806) 00:16:29.228 fused_ordering(807) 00:16:29.228 fused_ordering(808) 00:16:29.228 fused_ordering(809) 00:16:29.228 fused_ordering(810) 00:16:29.228 fused_ordering(811) 00:16:29.228 fused_ordering(812) 00:16:29.228 fused_ordering(813) 00:16:29.228 fused_ordering(814) 00:16:29.228 fused_ordering(815) 00:16:29.228 fused_ordering(816) 00:16:29.228 fused_ordering(817) 00:16:29.228 fused_ordering(818) 00:16:29.228 fused_ordering(819) 00:16:29.228 fused_ordering(820) 00:16:29.489 fused_ordering(821) 00:16:29.489 fused_ordering(822) 00:16:29.489 fused_ordering(823) 00:16:29.489 fused_ordering(824) 00:16:29.489 fused_ordering(825) 00:16:29.489 fused_ordering(826) 00:16:29.489 fused_ordering(827) 00:16:29.489 fused_ordering(828) 00:16:29.489 fused_ordering(829) 00:16:29.489 fused_ordering(830) 00:16:29.489 fused_ordering(831) 00:16:29.489 fused_ordering(832) 00:16:29.489 fused_ordering(833) 00:16:29.489 fused_ordering(834) 00:16:29.489 fused_ordering(835) 00:16:29.489 fused_ordering(836) 00:16:29.489 fused_ordering(837) 00:16:29.489 fused_ordering(838) 00:16:29.489 fused_ordering(839) 00:16:29.489 fused_ordering(840) 00:16:29.489 fused_ordering(841) 00:16:29.489 fused_ordering(842) 00:16:29.489 fused_ordering(843) 00:16:29.489 fused_ordering(844) 00:16:29.489 fused_ordering(845) 00:16:29.489 fused_ordering(846) 00:16:29.489 fused_ordering(847) 00:16:29.489 fused_ordering(848) 00:16:29.489 fused_ordering(849) 
00:16:29.489 fused_ordering(850) 00:16:29.489 fused_ordering(851) 00:16:29.489 fused_ordering(852) 00:16:29.489 fused_ordering(853) 00:16:29.489 fused_ordering(854) 00:16:29.489 fused_ordering(855) 00:16:29.489 fused_ordering(856) 00:16:29.489 fused_ordering(857) 00:16:29.489 fused_ordering(858) 00:16:29.489 fused_ordering(859) 00:16:29.489 fused_ordering(860) 00:16:29.489 fused_ordering(861) 00:16:29.489 fused_ordering(862) 00:16:29.489 fused_ordering(863) 00:16:29.489 fused_ordering(864) 00:16:29.489 fused_ordering(865) 00:16:29.489 fused_ordering(866) 00:16:29.489 fused_ordering(867) 00:16:29.489 fused_ordering(868) 00:16:29.489 fused_ordering(869) 00:16:29.489 fused_ordering(870) 00:16:29.489 fused_ordering(871) 00:16:29.489 fused_ordering(872) 00:16:29.489 fused_ordering(873) 00:16:29.489 fused_ordering(874) 00:16:29.489 fused_ordering(875) 00:16:29.489 fused_ordering(876) 00:16:29.489 fused_ordering(877) 00:16:29.489 fused_ordering(878) 00:16:29.489 fused_ordering(879) 00:16:29.489 fused_ordering(880) 00:16:29.489 fused_ordering(881) 00:16:29.489 fused_ordering(882) 00:16:29.489 fused_ordering(883) 00:16:29.489 fused_ordering(884) 00:16:29.489 fused_ordering(885) 00:16:29.489 fused_ordering(886) 00:16:29.489 fused_ordering(887) 00:16:29.489 fused_ordering(888) 00:16:29.489 fused_ordering(889) 00:16:29.489 fused_ordering(890) 00:16:29.489 fused_ordering(891) 00:16:29.489 fused_ordering(892) 00:16:29.489 fused_ordering(893) 00:16:29.489 fused_ordering(894) 00:16:29.489 fused_ordering(895) 00:16:29.489 fused_ordering(896) 00:16:29.489 fused_ordering(897) 00:16:29.489 fused_ordering(898) 00:16:29.489 fused_ordering(899) 00:16:29.489 fused_ordering(900) 00:16:29.489 fused_ordering(901) 00:16:29.489 fused_ordering(902) 00:16:29.489 fused_ordering(903) 00:16:29.489 fused_ordering(904) 00:16:29.489 fused_ordering(905) 00:16:29.489 fused_ordering(906) 00:16:29.489 fused_ordering(907) 00:16:29.489 fused_ordering(908) 00:16:29.489 fused_ordering(909) 00:16:29.489 fused_ordering(910) 00:16:29.489 fused_ordering(911) 00:16:29.489 fused_ordering(912) 00:16:29.489 fused_ordering(913) 00:16:29.489 fused_ordering(914) 00:16:29.489 fused_ordering(915) 00:16:29.489 fused_ordering(916) 00:16:29.489 fused_ordering(917) 00:16:29.489 fused_ordering(918) 00:16:29.489 fused_ordering(919) 00:16:29.489 fused_ordering(920) 00:16:29.489 fused_ordering(921) 00:16:29.489 fused_ordering(922) 00:16:29.489 fused_ordering(923) 00:16:29.489 fused_ordering(924) 00:16:29.489 fused_ordering(925) 00:16:29.489 fused_ordering(926) 00:16:29.489 fused_ordering(927) 00:16:29.489 fused_ordering(928) 00:16:29.489 fused_ordering(929) 00:16:29.489 fused_ordering(930) 00:16:29.489 fused_ordering(931) 00:16:29.489 fused_ordering(932) 00:16:29.489 fused_ordering(933) 00:16:29.489 fused_ordering(934) 00:16:29.489 fused_ordering(935) 00:16:29.489 fused_ordering(936) 00:16:29.489 fused_ordering(937) 00:16:29.489 fused_ordering(938) 00:16:29.489 fused_ordering(939) 00:16:29.489 fused_ordering(940) 00:16:29.489 fused_ordering(941) 00:16:29.489 fused_ordering(942) 00:16:29.489 fused_ordering(943) 00:16:29.489 fused_ordering(944) 00:16:29.489 fused_ordering(945) 00:16:29.489 fused_ordering(946) 00:16:29.489 fused_ordering(947) 00:16:29.489 fused_ordering(948) 00:16:29.489 fused_ordering(949) 00:16:29.489 fused_ordering(950) 00:16:29.489 fused_ordering(951) 00:16:29.489 fused_ordering(952) 00:16:29.489 fused_ordering(953) 00:16:29.489 fused_ordering(954) 00:16:29.489 fused_ordering(955) 00:16:29.489 fused_ordering(956) 00:16:29.489 
fused_ordering(957) 00:16:29.489 fused_ordering(958) 00:16:29.489 fused_ordering(959) 00:16:29.489 fused_ordering(960) 00:16:29.489 fused_ordering(961) 00:16:29.489 fused_ordering(962) 00:16:29.489 fused_ordering(963) 00:16:29.489 fused_ordering(964) 00:16:29.489 fused_ordering(965) 00:16:29.489 fused_ordering(966) 00:16:29.489 fused_ordering(967) 00:16:29.489 fused_ordering(968) 00:16:29.489 fused_ordering(969) 00:16:29.489 fused_ordering(970) 00:16:29.489 fused_ordering(971) 00:16:29.489 fused_ordering(972) 00:16:29.489 fused_ordering(973) 00:16:29.489 fused_ordering(974) 00:16:29.489 fused_ordering(975) 00:16:29.489 fused_ordering(976) 00:16:29.489 fused_ordering(977) 00:16:29.489 fused_ordering(978) 00:16:29.489 fused_ordering(979) 00:16:29.489 fused_ordering(980) 00:16:29.489 fused_ordering(981) 00:16:29.489 fused_ordering(982) 00:16:29.489 fused_ordering(983) 00:16:29.489 fused_ordering(984) 00:16:29.489 fused_ordering(985) 00:16:29.489 fused_ordering(986) 00:16:29.489 fused_ordering(987) 00:16:29.489 fused_ordering(988) 00:16:29.489 fused_ordering(989) 00:16:29.489 fused_ordering(990) 00:16:29.489 fused_ordering(991) 00:16:29.489 fused_ordering(992) 00:16:29.489 fused_ordering(993) 00:16:29.489 fused_ordering(994) 00:16:29.489 fused_ordering(995) 00:16:29.489 fused_ordering(996) 00:16:29.489 fused_ordering(997) 00:16:29.489 fused_ordering(998) 00:16:29.489 fused_ordering(999) 00:16:29.489 fused_ordering(1000) 00:16:29.489 fused_ordering(1001) 00:16:29.489 fused_ordering(1002) 00:16:29.489 fused_ordering(1003) 00:16:29.489 fused_ordering(1004) 00:16:29.489 fused_ordering(1005) 00:16:29.489 fused_ordering(1006) 00:16:29.489 fused_ordering(1007) 00:16:29.489 fused_ordering(1008) 00:16:29.489 fused_ordering(1009) 00:16:29.489 fused_ordering(1010) 00:16:29.489 fused_ordering(1011) 00:16:29.489 fused_ordering(1012) 00:16:29.489 fused_ordering(1013) 00:16:29.489 fused_ordering(1014) 00:16:29.489 fused_ordering(1015) 00:16:29.489 fused_ordering(1016) 00:16:29.489 fused_ordering(1017) 00:16:29.490 fused_ordering(1018) 00:16:29.490 fused_ordering(1019) 00:16:29.490 fused_ordering(1020) 00:16:29.490 fused_ordering(1021) 00:16:29.490 fused_ordering(1022) 00:16:29.490 fused_ordering(1023) 00:16:29.490 17:25:49 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:29.490 17:25:49 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:29.490 17:25:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:29.490 17:25:49 -- nvmf/common.sh@116 -- # sync 00:16:29.490 17:25:49 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:29.490 17:25:49 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:29.490 17:25:49 -- nvmf/common.sh@119 -- # set +e 00:16:29.490 17:25:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:29.490 17:25:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:29.490 rmmod nvme_rdma 00:16:29.490 rmmod nvme_fabrics 00:16:29.490 17:25:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:29.490 17:25:49 -- nvmf/common.sh@123 -- # set -e 00:16:29.490 17:25:49 -- nvmf/common.sh@124 -- # return 0 00:16:29.490 17:25:49 -- nvmf/common.sh@477 -- # '[' -n 2667939 ']' 00:16:29.490 17:25:49 -- nvmf/common.sh@478 -- # killprocess 2667939 00:16:29.490 17:25:49 -- common/autotest_common.sh@936 -- # '[' -z 2667939 ']' 00:16:29.490 17:25:49 -- common/autotest_common.sh@940 -- # kill -0 2667939 00:16:29.490 17:25:49 -- common/autotest_common.sh@941 -- # uname 00:16:29.490 17:25:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:29.490 17:25:49 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2667939 00:16:29.490 17:25:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:29.490 17:25:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:29.490 17:25:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2667939' 00:16:29.490 killing process with pid 2667939 00:16:29.490 17:25:49 -- common/autotest_common.sh@955 -- # kill 2667939 00:16:29.490 17:25:49 -- common/autotest_common.sh@960 -- # wait 2667939 00:16:29.750 17:25:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:29.750 17:25:49 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:29.750 00:16:29.750 real 0m9.328s 00:16:29.750 user 0m4.824s 00:16:29.750 sys 0m5.876s 00:16:29.750 17:25:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:29.750 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:16:29.750 ************************************ 00:16:29.750 END TEST nvmf_fused_ordering 00:16:29.750 ************************************ 00:16:29.750 17:25:49 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:29.750 17:25:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:29.750 17:25:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:29.750 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:16:29.750 ************************************ 00:16:29.750 START TEST nvmf_delete_subsystem 00:16:29.750 ************************************ 00:16:29.750 17:25:49 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:29.750 * Looking for test storage... 00:16:29.750 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:29.750 17:25:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:29.750 17:25:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:29.750 17:25:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:30.012 17:25:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:30.012 17:25:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:30.012 17:25:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:30.012 17:25:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:30.012 17:25:49 -- scripts/common.sh@335 -- # IFS=.-: 00:16:30.012 17:25:49 -- scripts/common.sh@335 -- # read -ra ver1 00:16:30.012 17:25:49 -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.012 17:25:49 -- scripts/common.sh@336 -- # read -ra ver2 00:16:30.012 17:25:49 -- scripts/common.sh@337 -- # local 'op=<' 00:16:30.012 17:25:49 -- scripts/common.sh@339 -- # ver1_l=2 00:16:30.012 17:25:49 -- scripts/common.sh@340 -- # ver2_l=1 00:16:30.012 17:25:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:30.012 17:25:49 -- scripts/common.sh@343 -- # case "$op" in 00:16:30.012 17:25:49 -- scripts/common.sh@344 -- # : 1 00:16:30.012 17:25:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:30.012 17:25:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.012 17:25:49 -- scripts/common.sh@364 -- # decimal 1 00:16:30.012 17:25:49 -- scripts/common.sh@352 -- # local d=1 00:16:30.012 17:25:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.012 17:25:49 -- scripts/common.sh@354 -- # echo 1 00:16:30.012 17:25:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:30.012 17:25:49 -- scripts/common.sh@365 -- # decimal 2 00:16:30.013 17:25:49 -- scripts/common.sh@352 -- # local d=2 00:16:30.013 17:25:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.013 17:25:49 -- scripts/common.sh@354 -- # echo 2 00:16:30.013 17:25:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:30.013 17:25:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:30.013 17:25:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:30.013 17:25:49 -- scripts/common.sh@367 -- # return 0 00:16:30.013 17:25:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.013 17:25:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:30.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.013 --rc genhtml_branch_coverage=1 00:16:30.013 --rc genhtml_function_coverage=1 00:16:30.013 --rc genhtml_legend=1 00:16:30.013 --rc geninfo_all_blocks=1 00:16:30.013 --rc geninfo_unexecuted_blocks=1 00:16:30.013 00:16:30.013 ' 00:16:30.013 17:25:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:30.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.013 --rc genhtml_branch_coverage=1 00:16:30.013 --rc genhtml_function_coverage=1 00:16:30.013 --rc genhtml_legend=1 00:16:30.013 --rc geninfo_all_blocks=1 00:16:30.013 --rc geninfo_unexecuted_blocks=1 00:16:30.013 00:16:30.013 ' 00:16:30.013 17:25:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:30.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.013 --rc genhtml_branch_coverage=1 00:16:30.013 --rc genhtml_function_coverage=1 00:16:30.013 --rc genhtml_legend=1 00:16:30.013 --rc geninfo_all_blocks=1 00:16:30.013 --rc geninfo_unexecuted_blocks=1 00:16:30.013 00:16:30.013 ' 00:16:30.013 17:25:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:30.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.013 --rc genhtml_branch_coverage=1 00:16:30.013 --rc genhtml_function_coverage=1 00:16:30.013 --rc genhtml_legend=1 00:16:30.013 --rc geninfo_all_blocks=1 00:16:30.013 --rc geninfo_unexecuted_blocks=1 00:16:30.013 00:16:30.013 ' 00:16:30.013 17:25:49 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.013 17:25:49 -- nvmf/common.sh@7 -- # uname -s 00:16:30.013 17:25:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.013 17:25:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.013 17:25:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.013 17:25:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.013 17:25:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.013 17:25:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.013 17:25:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.013 17:25:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.013 17:25:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.013 17:25:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.013 17:25:49 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:30.013 17:25:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:30.013 17:25:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.013 17:25:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.013 17:25:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.013 17:25:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:30.013 17:25:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.013 17:25:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.013 17:25:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.013 17:25:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.013 17:25:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.013 17:25:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.013 17:25:49 -- paths/export.sh@5 -- # export PATH 00:16:30.013 17:25:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.013 17:25:49 -- nvmf/common.sh@46 -- # : 0 00:16:30.013 17:25:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:30.013 17:25:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:30.013 17:25:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:30.013 17:25:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.013 17:25:49 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.013 17:25:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:30.013 17:25:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:30.013 17:25:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:30.013 17:25:49 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:30.013 17:25:49 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:30.013 17:25:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.013 17:25:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:30.013 17:25:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:30.013 17:25:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:30.013 17:25:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.013 17:25:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.013 17:25:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.013 17:25:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:30.013 17:25:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:30.013 17:25:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:30.013 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:16:36.587 17:25:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:36.587 17:25:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:36.587 17:25:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:36.587 17:25:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:36.587 17:25:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:36.587 17:25:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:36.587 17:25:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:36.587 17:25:56 -- nvmf/common.sh@294 -- # net_devs=() 00:16:36.587 17:25:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:36.587 17:25:56 -- nvmf/common.sh@295 -- # e810=() 00:16:36.587 17:25:56 -- nvmf/common.sh@295 -- # local -ga e810 00:16:36.587 17:25:56 -- nvmf/common.sh@296 -- # x722=() 00:16:36.587 17:25:56 -- nvmf/common.sh@296 -- # local -ga x722 00:16:36.587 17:25:56 -- nvmf/common.sh@297 -- # mlx=() 00:16:36.587 17:25:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:36.587 17:25:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.587 17:25:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.587 17:25:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.587 17:25:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.587 17:25:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.587 17:25:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.587 17:25:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.587 17:25:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.587 17:25:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.587 17:25:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.587 17:25:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.587 17:25:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:36.587 17:25:56 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:36.587 17:25:56 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:36.587 17:25:56 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
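The trace above builds the node's list of NVMe-oF-capable NICs by matching PCI vendor:device IDs against known Intel E810/X722 and Mellanox ConnectX parts, then (because SPDK_TEST_NVMF_NICS=mlx5) narrows pci_devs down to the Mellanox list. A rough sketch of that matching step, using plain lspci instead of SPDK's internal pci_bus_cache arrays — the device-ID list and variable names below are only illustrative, not the script's own:

    mellanox=0x15b3
    mlx=()
    for id in 0x1013 0x1015 0x1017 0x1019 0x101d 0x1021 0xa2d6 0xa2dc; do
        # lspci -Dnn -d <vendor>:<device> prints one line per matching PCI
        # function, with the full address (e.g. 0000:d9:00.0) in column 1.
        while read -r addr _; do
            mlx+=("$addr")
        done < <(lspci -Dnn -d "${mellanox#0x}:${id#0x}" 2>/dev/null)
    done
    printf 'Found %d Mellanox device(s): %s\n' "${#mlx[@]}" "${mlx[*]}"

On this node the result is the two ConnectX functions 0000:d9:00.0 and 0000:d9:00.1 (device 0x1015) reported in the "Found ..." lines that follow.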
00:16:36.587 17:25:56 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:36.587 17:25:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:36.587 17:25:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:36.587 17:25:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:36.587 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:36.587 17:25:56 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:36.587 17:25:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:36.587 17:25:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:36.587 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:36.587 17:25:56 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:36.587 17:25:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:36.587 17:25:56 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:36.587 17:25:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.587 17:25:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:36.587 17:25:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.587 17:25:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:36.587 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:36.587 17:25:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.587 17:25:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:36.587 17:25:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.587 17:25:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:36.587 17:25:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.587 17:25:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:36.587 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:36.587 17:25:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.587 17:25:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:36.587 17:25:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:36.587 17:25:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:36.587 17:25:56 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:36.587 17:25:56 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:36.587 17:25:56 -- nvmf/common.sh@57 -- # uname 00:16:36.587 17:25:56 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:36.587 17:25:56 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:36.587 17:25:56 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:36.587 17:25:56 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:36.587 
17:25:56 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:36.587 17:25:56 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:36.587 17:25:56 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:36.587 17:25:56 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:36.847 17:25:56 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:36.847 17:25:56 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:36.847 17:25:56 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:36.847 17:25:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:36.847 17:25:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:36.847 17:25:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:36.847 17:25:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:36.847 17:25:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:36.847 17:25:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:36.847 17:25:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:36.847 17:25:56 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:36.847 17:25:56 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:36.847 17:25:56 -- nvmf/common.sh@104 -- # continue 2 00:16:36.847 17:25:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:36.847 17:25:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:36.847 17:25:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:36.847 17:25:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:36.847 17:25:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:36.847 17:25:56 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:36.847 17:25:56 -- nvmf/common.sh@104 -- # continue 2 00:16:36.847 17:25:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:36.847 17:25:56 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:36.847 17:25:56 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:36.847 17:25:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:36.847 17:25:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:36.847 17:25:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:36.847 17:25:56 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:36.847 17:25:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:36.847 17:25:56 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:36.847 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:36.847 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:36.847 altname enp217s0f0np0 00:16:36.847 altname ens818f0np0 00:16:36.847 inet 192.168.100.8/24 scope global mlx_0_0 00:16:36.847 valid_lft forever preferred_lft forever 00:16:36.847 17:25:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:36.847 17:25:56 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:36.847 17:25:56 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:36.847 17:25:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:36.847 17:25:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:36.847 17:25:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:36.847 17:25:56 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:36.847 17:25:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:36.847 17:25:56 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:36.847 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:36.847 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:36.847 altname enp217s0f1np1 
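allocate_nic_ips above walks the RDMA interfaces returned by get_rdma_if_list and reads each one's IPv4 address; as the nvmf/common.sh@111-112 trace shows, the lookup is just an ip/awk/cut pipeline. A condensed sketch of that helper, using the interface and address from this run:

    # ip -o -4 prints one line per address; field 4 is the CIDR (192.168.100.8/24),
    # and cutting at '/' leaves the bare IPv4 address.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    ip_addr=$(get_ip_address mlx_0_0)    # 192.168.100.8 on this node
    [[ -n $ip_addr ]] || echo "no IPv4 address configured on mlx_0_0" >&2

The subsequent "ip addr show" dumps confirm both mlx_0_0 (192.168.100.8/24) and mlx_0_1 (192.168.100.9/24) carry addresses even though their link state is DOWN.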
00:16:36.847 altname ens818f1np1 00:16:36.847 inet 192.168.100.9/24 scope global mlx_0_1 00:16:36.847 valid_lft forever preferred_lft forever 00:16:36.847 17:25:56 -- nvmf/common.sh@410 -- # return 0 00:16:36.847 17:25:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:36.848 17:25:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:36.848 17:25:56 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:36.848 17:25:56 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:36.848 17:25:56 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:36.848 17:25:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:36.848 17:25:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:36.848 17:25:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:36.848 17:25:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:36.848 17:25:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:36.848 17:25:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:36.848 17:25:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:36.848 17:25:56 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:36.848 17:25:56 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:36.848 17:25:56 -- nvmf/common.sh@104 -- # continue 2 00:16:36.848 17:25:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:36.848 17:25:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:36.848 17:25:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:36.848 17:25:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:36.848 17:25:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:36.848 17:25:56 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:36.848 17:25:56 -- nvmf/common.sh@104 -- # continue 2 00:16:36.848 17:25:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:36.848 17:25:56 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:36.848 17:25:56 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:36.848 17:25:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:36.848 17:25:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:36.848 17:25:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:36.848 17:25:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:36.848 17:25:56 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:36.848 17:25:56 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:36.848 17:25:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:36.848 17:25:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:36.848 17:25:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:36.848 17:25:56 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:36.848 192.168.100.9' 00:16:36.848 17:25:56 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:36.848 192.168.100.9' 00:16:36.848 17:25:56 -- nvmf/common.sh@445 -- # head -n 1 00:16:36.848 17:25:56 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:36.848 17:25:56 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:36.848 192.168.100.9' 00:16:36.848 17:25:56 -- nvmf/common.sh@446 -- # tail -n +2 00:16:36.848 17:25:56 -- nvmf/common.sh@446 -- # head -n 1 00:16:36.848 17:25:56 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:36.848 17:25:56 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:36.848 17:25:56 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:16:36.848 17:25:56 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:36.848 17:25:56 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:36.848 17:25:56 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:36.848 17:25:56 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:36.848 17:25:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:36.848 17:25:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:36.848 17:25:56 -- common/autotest_common.sh@10 -- # set +x 00:16:36.848 17:25:56 -- nvmf/common.sh@469 -- # nvmfpid=2671671 00:16:36.848 17:25:56 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:36.848 17:25:56 -- nvmf/common.sh@470 -- # waitforlisten 2671671 00:16:36.848 17:25:56 -- common/autotest_common.sh@829 -- # '[' -z 2671671 ']' 00:16:36.848 17:25:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.848 17:25:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.848 17:25:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.848 17:25:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.848 17:25:56 -- common/autotest_common.sh@10 -- # set +x 00:16:36.848 [2024-11-09 17:25:56.602003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:36.848 [2024-11-09 17:25:56.602071] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.108 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.108 [2024-11-09 17:25:56.671996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:37.108 [2024-11-09 17:25:56.739653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:37.108 [2024-11-09 17:25:56.739783] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.108 [2024-11-09 17:25:56.739793] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.108 [2024-11-09 17:25:56.739802] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
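With both interfaces resolved, the script joins their addresses into the newline-separated RDMA_IP_LIST and peels off the first and second target IPs with head and tail, as the nvmf/common.sh@444-446 trace above shows. A condensed sketch of that step, using the two addresses from this run:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "first target: $NVMF_FIRST_TARGET_IP, second target: $NVMF_SECOND_TARGET_IP"

192.168.100.8 then becomes the listener address for the nqn.2016-06.io.spdk:cnode1 subsystem created a few lines further down.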
00:16:37.108 [2024-11-09 17:25:56.739858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.108 [2024-11-09 17:25:56.739861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.676 17:25:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.676 17:25:57 -- common/autotest_common.sh@862 -- # return 0 00:16:37.676 17:25:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:37.676 17:25:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:37.676 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:16:37.936 17:25:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.936 17:25:57 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:37.936 17:25:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.936 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:16:37.936 [2024-11-09 17:25:57.489048] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17ada60/0x17b1f50) succeed. 00:16:37.936 [2024-11-09 17:25:57.498086] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17aef60/0x17f35f0) succeed. 00:16:37.936 17:25:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.936 17:25:57 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:37.936 17:25:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.936 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:16:37.936 17:25:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.936 17:25:57 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:37.936 17:25:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.936 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:16:37.936 [2024-11-09 17:25:57.579132] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:37.936 17:25:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.936 17:25:57 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:37.936 17:25:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.936 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:16:37.936 NULL1 00:16:37.936 17:25:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.936 17:25:57 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:37.936 17:25:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.936 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:16:37.936 Delay0 00:16:37.936 17:25:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.936 17:25:57 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:37.936 17:25:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.936 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:16:37.936 17:25:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.936 17:25:57 -- target/delete_subsystem.sh@28 -- # perf_pid=2671931 00:16:37.936 17:25:57 -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:37.936 17:25:57 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma 
adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:37.936 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.936 [2024-11-09 17:25:57.686055] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:39.849 17:25:59 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:39.849 17:25:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.849 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:16:41.224 NVMe io qpair process completion error 00:16:41.224 NVMe io qpair process completion error 00:16:41.224 NVMe io qpair process completion error 00:16:41.224 NVMe io qpair process completion error 00:16:41.224 NVMe io qpair process completion error 00:16:41.224 NVMe io qpair process completion error 00:16:41.224 17:26:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.224 17:26:00 -- target/delete_subsystem.sh@34 -- # delay=0 00:16:41.224 17:26:00 -- target/delete_subsystem.sh@35 -- # kill -0 2671931 00:16:41.224 17:26:00 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:41.791 17:26:01 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:41.791 17:26:01 -- target/delete_subsystem.sh@35 -- # kill -0 2671931 00:16:41.791 17:26:01 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read 
completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 
00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Write completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.051 Read completed with error (sct=0, sc=8) 00:16:42.051 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Write completed with error (sct=0, 
sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 starting I/O failed: -6 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, 
sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Write 
completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Read completed with error (sct=0, sc=8) 00:16:42.052 Write completed with error (sct=0, sc=8) 00:16:42.052 17:26:01 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:42.052 17:26:01 -- target/delete_subsystem.sh@35 -- # kill -0 2671931 00:16:42.052 17:26:01 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:42.052 [2024-11-09 17:26:01.784048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:42.052 [2024-11-09 17:26:01.784100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:42.052 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:42.052 Initializing NVMe Controllers 00:16:42.052 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:42.052 Controller IO queue size 128, less than required. 00:16:42.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:42.052 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:42.052 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:42.052 Initialization complete. Launching workers. 
00:16:42.052 ======================================================== 00:16:42.052 Latency(us) 00:16:42.052 Device Information : IOPS MiB/s Average min max 00:16:42.052 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.38 0.04 1596574.09 1000114.41 2985889.64 00:16:42.052 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.38 0.04 1595662.46 1000235.54 2979520.92 00:16:42.052 ======================================================== 00:16:42.052 Total : 160.76 0.08 1596118.28 1000114.41 2985889.64 00:16:42.052 00:16:42.621 17:26:02 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:42.621 17:26:02 -- target/delete_subsystem.sh@35 -- # kill -0 2671931 00:16:42.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2671931) - No such process 00:16:42.621 17:26:02 -- target/delete_subsystem.sh@45 -- # NOT wait 2671931 00:16:42.621 17:26:02 -- common/autotest_common.sh@650 -- # local es=0 00:16:42.621 17:26:02 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2671931 00:16:42.621 17:26:02 -- common/autotest_common.sh@638 -- # local arg=wait 00:16:42.621 17:26:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:42.621 17:26:02 -- common/autotest_common.sh@642 -- # type -t wait 00:16:42.621 17:26:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:42.621 17:26:02 -- common/autotest_common.sh@653 -- # wait 2671931 00:16:42.621 17:26:02 -- common/autotest_common.sh@653 -- # es=1 00:16:42.621 17:26:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:42.621 17:26:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:42.621 17:26:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:42.621 17:26:02 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:42.621 17:26:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.621 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:16:42.621 17:26:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.621 17:26:02 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:42.621 17:26:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.621 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:16:42.621 [2024-11-09 17:26:02.299893] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:42.621 17:26:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.621 17:26:02 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:42.621 17:26:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.621 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:16:42.621 17:26:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.621 17:26:02 -- target/delete_subsystem.sh@54 -- # perf_pid=2672731 00:16:42.621 17:26:02 -- target/delete_subsystem.sh@56 -- # delay=0 00:16:42.621 17:26:02 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:42.621 17:26:02 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:42.621 17:26:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:42.621 
EAL: No free 2048 kB hugepages reported on node 1 00:16:42.881 [2024-11-09 17:26:02.390762] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:43.140 17:26:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:43.140 17:26:02 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:43.140 17:26:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:43.711 17:26:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:43.711 17:26:03 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:43.711 17:26:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:44.279 17:26:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:44.279 17:26:03 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:44.279 17:26:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:44.847 17:26:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:44.847 17:26:04 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:44.847 17:26:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:45.106 17:26:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:45.106 17:26:04 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:45.106 17:26:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:45.675 17:26:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:45.675 17:26:05 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:45.675 17:26:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:46.244 17:26:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:46.244 17:26:05 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:46.244 17:26:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:46.812 17:26:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:46.812 17:26:06 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:46.812 17:26:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:47.381 17:26:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:47.381 17:26:06 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:47.381 17:26:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:47.640 17:26:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:47.640 17:26:07 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:47.640 17:26:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:48.208 17:26:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:48.208 17:26:07 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:48.209 17:26:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:48.777 17:26:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:48.777 17:26:08 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:48.777 17:26:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:49.346 17:26:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:49.346 17:26:08 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:49.346 17:26:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:49.914 17:26:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:49.914 17:26:09 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:49.914 17:26:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:49.915 Initializing NVMe 
Controllers 00:16:49.915 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:49.915 Controller IO queue size 128, less than required. 00:16:49.915 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:49.915 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:49.915 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:49.915 Initialization complete. Launching workers. 00:16:49.915 ======================================================== 00:16:49.915 Latency(us) 00:16:49.915 Device Information : IOPS MiB/s Average min max 00:16:49.915 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001414.81 1000054.84 1003777.81 00:16:49.915 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002616.03 1000113.32 1006144.48 00:16:49.915 ======================================================== 00:16:49.915 Total : 256.00 0.12 1002015.42 1000054.84 1006144.48 00:16:49.915 00:16:50.174 17:26:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:50.174 17:26:09 -- target/delete_subsystem.sh@57 -- # kill -0 2672731 00:16:50.174 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2672731) - No such process 00:16:50.174 17:26:09 -- target/delete_subsystem.sh@67 -- # wait 2672731 00:16:50.174 17:26:09 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:50.174 17:26:09 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:50.174 17:26:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:50.174 17:26:09 -- nvmf/common.sh@116 -- # sync 00:16:50.174 17:26:09 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:50.174 17:26:09 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:50.174 17:26:09 -- nvmf/common.sh@119 -- # set +e 00:16:50.174 17:26:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:50.174 17:26:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:50.174 rmmod nvme_rdma 00:16:50.174 rmmod nvme_fabrics 00:16:50.174 17:26:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:50.174 17:26:09 -- nvmf/common.sh@123 -- # set -e 00:16:50.174 17:26:09 -- nvmf/common.sh@124 -- # return 0 00:16:50.174 17:26:09 -- nvmf/common.sh@477 -- # '[' -n 2671671 ']' 00:16:50.174 17:26:09 -- nvmf/common.sh@478 -- # killprocess 2671671 00:16:50.174 17:26:09 -- common/autotest_common.sh@936 -- # '[' -z 2671671 ']' 00:16:50.174 17:26:09 -- common/autotest_common.sh@940 -- # kill -0 2671671 00:16:50.435 17:26:09 -- common/autotest_common.sh@941 -- # uname 00:16:50.435 17:26:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:50.435 17:26:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2671671 00:16:50.435 17:26:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:50.435 17:26:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:50.435 17:26:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2671671' 00:16:50.435 killing process with pid 2671671 00:16:50.435 17:26:10 -- common/autotest_common.sh@955 -- # kill 2671671 00:16:50.435 17:26:10 -- common/autotest_common.sh@960 -- # wait 2671671 00:16:50.695 17:26:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:50.695 17:26:10 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:50.696 00:16:50.696 real 0m20.854s 
00:16:50.696 user 0m50.365s 00:16:50.696 sys 0m6.498s 00:16:50.696 17:26:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:50.696 17:26:10 -- common/autotest_common.sh@10 -- # set +x 00:16:50.696 ************************************ 00:16:50.696 END TEST nvmf_delete_subsystem 00:16:50.696 ************************************ 00:16:50.696 17:26:10 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:16:50.696 17:26:10 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:16:50.696 17:26:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:50.696 17:26:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.696 17:26:10 -- common/autotest_common.sh@10 -- # set +x 00:16:50.696 ************************************ 00:16:50.696 START TEST nvmf_nvme_cli 00:16:50.696 ************************************ 00:16:50.696 17:26:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:16:50.696 * Looking for test storage... 00:16:50.696 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:50.696 17:26:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:50.696 17:26:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:50.696 17:26:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:50.955 17:26:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:50.955 17:26:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:50.955 17:26:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:50.955 17:26:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:50.955 17:26:10 -- scripts/common.sh@335 -- # IFS=.-: 00:16:50.955 17:26:10 -- scripts/common.sh@335 -- # read -ra ver1 00:16:50.955 17:26:10 -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.955 17:26:10 -- scripts/common.sh@336 -- # read -ra ver2 00:16:50.955 17:26:10 -- scripts/common.sh@337 -- # local 'op=<' 00:16:50.955 17:26:10 -- scripts/common.sh@339 -- # ver1_l=2 00:16:50.955 17:26:10 -- scripts/common.sh@340 -- # ver2_l=1 00:16:50.955 17:26:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:50.955 17:26:10 -- scripts/common.sh@343 -- # case "$op" in 00:16:50.955 17:26:10 -- scripts/common.sh@344 -- # : 1 00:16:50.955 17:26:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:50.955 17:26:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.955 17:26:10 -- scripts/common.sh@364 -- # decimal 1 00:16:50.955 17:26:10 -- scripts/common.sh@352 -- # local d=1 00:16:50.955 17:26:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.955 17:26:10 -- scripts/common.sh@354 -- # echo 1 00:16:50.955 17:26:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:50.955 17:26:10 -- scripts/common.sh@365 -- # decimal 2 00:16:50.955 17:26:10 -- scripts/common.sh@352 -- # local d=2 00:16:50.955 17:26:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.955 17:26:10 -- scripts/common.sh@354 -- # echo 2 00:16:50.955 17:26:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:50.955 17:26:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:50.955 17:26:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:50.955 17:26:10 -- scripts/common.sh@367 -- # return 0 00:16:50.955 17:26:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.955 17:26:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:50.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.955 --rc genhtml_branch_coverage=1 00:16:50.955 --rc genhtml_function_coverage=1 00:16:50.955 --rc genhtml_legend=1 00:16:50.955 --rc geninfo_all_blocks=1 00:16:50.955 --rc geninfo_unexecuted_blocks=1 00:16:50.955 00:16:50.955 ' 00:16:50.955 17:26:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:50.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.955 --rc genhtml_branch_coverage=1 00:16:50.955 --rc genhtml_function_coverage=1 00:16:50.955 --rc genhtml_legend=1 00:16:50.955 --rc geninfo_all_blocks=1 00:16:50.955 --rc geninfo_unexecuted_blocks=1 00:16:50.955 00:16:50.955 ' 00:16:50.955 17:26:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:50.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.955 --rc genhtml_branch_coverage=1 00:16:50.955 --rc genhtml_function_coverage=1 00:16:50.955 --rc genhtml_legend=1 00:16:50.955 --rc geninfo_all_blocks=1 00:16:50.955 --rc geninfo_unexecuted_blocks=1 00:16:50.955 00:16:50.955 ' 00:16:50.955 17:26:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:50.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.955 --rc genhtml_branch_coverage=1 00:16:50.955 --rc genhtml_function_coverage=1 00:16:50.955 --rc genhtml_legend=1 00:16:50.955 --rc geninfo_all_blocks=1 00:16:50.955 --rc geninfo_unexecuted_blocks=1 00:16:50.955 00:16:50.955 ' 00:16:50.955 17:26:10 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.955 17:26:10 -- nvmf/common.sh@7 -- # uname -s 00:16:50.955 17:26:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.955 17:26:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.955 17:26:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.955 17:26:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.955 17:26:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.955 17:26:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.955 17:26:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.955 17:26:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.955 17:26:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.955 17:26:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.956 17:26:10 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:50.956 17:26:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:50.956 17:26:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.956 17:26:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.956 17:26:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.956 17:26:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:50.956 17:26:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.956 17:26:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.956 17:26:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.956 17:26:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.956 17:26:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.956 17:26:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.956 17:26:10 -- paths/export.sh@5 -- # export PATH 00:16:50.956 17:26:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.956 17:26:10 -- nvmf/common.sh@46 -- # : 0 00:16:50.956 17:26:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:50.956 17:26:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:50.956 17:26:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:50.956 17:26:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.956 17:26:10 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.956 17:26:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:50.956 17:26:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:50.956 17:26:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:50.956 17:26:10 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:50.956 17:26:10 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:50.956 17:26:10 -- target/nvme_cli.sh@14 -- # devs=() 00:16:50.956 17:26:10 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:50.956 17:26:10 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:50.956 17:26:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.956 17:26:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:50.956 17:26:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:50.956 17:26:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:50.956 17:26:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.956 17:26:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.956 17:26:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.956 17:26:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:50.956 17:26:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:50.956 17:26:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:50.956 17:26:10 -- common/autotest_common.sh@10 -- # set +x 00:16:57.530 17:26:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:57.530 17:26:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:57.530 17:26:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:57.530 17:26:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:57.530 17:26:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:57.530 17:26:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:57.530 17:26:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:57.530 17:26:17 -- nvmf/common.sh@294 -- # net_devs=() 00:16:57.530 17:26:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:57.530 17:26:17 -- nvmf/common.sh@295 -- # e810=() 00:16:57.530 17:26:17 -- nvmf/common.sh@295 -- # local -ga e810 00:16:57.530 17:26:17 -- nvmf/common.sh@296 -- # x722=() 00:16:57.530 17:26:17 -- nvmf/common.sh@296 -- # local -ga x722 00:16:57.530 17:26:17 -- nvmf/common.sh@297 -- # mlx=() 00:16:57.530 17:26:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:57.530 17:26:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.530 17:26:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.530 17:26:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.530 17:26:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.530 17:26:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.530 17:26:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.530 17:26:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.530 17:26:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.530 17:26:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.530 17:26:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.530 17:26:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.530 17:26:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:57.530 17:26:17 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:57.530 17:26:17 
-- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:57.530 17:26:17 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:57.531 17:26:17 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:57.531 17:26:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:57.531 17:26:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:57.531 17:26:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:57.531 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:57.531 17:26:17 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:57.531 17:26:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:57.531 17:26:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:57.531 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:57.531 17:26:17 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:57.531 17:26:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:57.531 17:26:17 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:57.531 17:26:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.531 17:26:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:57.531 17:26:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.531 17:26:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:57.531 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:57.531 17:26:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.531 17:26:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:57.531 17:26:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.531 17:26:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:57.531 17:26:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.531 17:26:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:57.531 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:57.531 17:26:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.531 17:26:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:57.531 17:26:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:57.531 17:26:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:57.531 17:26:17 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:57.531 17:26:17 -- nvmf/common.sh@57 -- # uname 00:16:57.531 17:26:17 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:57.531 
17:26:17 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:57.531 17:26:17 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:57.531 17:26:17 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:57.531 17:26:17 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:57.531 17:26:17 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:57.531 17:26:17 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:57.531 17:26:17 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:57.531 17:26:17 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:57.531 17:26:17 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:57.531 17:26:17 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:57.531 17:26:17 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:57.531 17:26:17 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:57.531 17:26:17 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:57.531 17:26:17 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:57.531 17:26:17 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:57.531 17:26:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:57.531 17:26:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.531 17:26:17 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:57.531 17:26:17 -- nvmf/common.sh@104 -- # continue 2 00:16:57.531 17:26:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:57.531 17:26:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.531 17:26:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.531 17:26:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:57.531 17:26:17 -- nvmf/common.sh@104 -- # continue 2 00:16:57.531 17:26:17 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:57.531 17:26:17 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:57.531 17:26:17 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:57.531 17:26:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:57.531 17:26:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:57.531 17:26:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:57.531 17:26:17 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:57.531 17:26:17 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:57.531 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:57.531 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:57.531 altname enp217s0f0np0 00:16:57.531 altname ens818f0np0 00:16:57.531 inet 192.168.100.8/24 scope global mlx_0_0 00:16:57.531 valid_lft forever preferred_lft forever 00:16:57.531 17:26:17 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:57.531 17:26:17 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:57.531 17:26:17 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:57.531 17:26:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:57.531 17:26:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:57.531 17:26:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:57.531 17:26:17 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:57.531 17:26:17 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@80 -- # ip addr show 
mlx_0_1 00:16:57.531 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:57.531 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:57.531 altname enp217s0f1np1 00:16:57.531 altname ens818f1np1 00:16:57.531 inet 192.168.100.9/24 scope global mlx_0_1 00:16:57.531 valid_lft forever preferred_lft forever 00:16:57.531 17:26:17 -- nvmf/common.sh@410 -- # return 0 00:16:57.531 17:26:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:57.531 17:26:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:57.531 17:26:17 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:57.531 17:26:17 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:57.531 17:26:17 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:57.531 17:26:17 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:57.531 17:26:17 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:57.531 17:26:17 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:57.531 17:26:17 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:57.531 17:26:17 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:57.531 17:26:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:57.532 17:26:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.532 17:26:17 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:57.532 17:26:17 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:57.532 17:26:17 -- nvmf/common.sh@104 -- # continue 2 00:16:57.532 17:26:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:57.532 17:26:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.532 17:26:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:57.532 17:26:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.532 17:26:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:57.532 17:26:17 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:57.532 17:26:17 -- nvmf/common.sh@104 -- # continue 2 00:16:57.532 17:26:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:57.532 17:26:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:57.532 17:26:17 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:57.532 17:26:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:57.532 17:26:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:57.532 17:26:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:57.532 17:26:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:57.532 17:26:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:57.532 17:26:17 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:57.532 17:26:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:57.532 17:26:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:57.532 17:26:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:57.532 17:26:17 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:57.532 192.168.100.9' 00:16:57.532 17:26:17 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:57.532 192.168.100.9' 00:16:57.532 17:26:17 -- nvmf/common.sh@445 -- # head -n 1 00:16:57.532 17:26:17 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:57.532 17:26:17 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:57.532 192.168.100.9' 00:16:57.532 17:26:17 -- nvmf/common.sh@446 -- # tail -n +2 00:16:57.532 17:26:17 -- nvmf/common.sh@446 -- # head -n 1 00:16:57.532 17:26:17 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:57.532 17:26:17 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:57.532 17:26:17 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:57.532 17:26:17 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:57.532 17:26:17 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:57.532 17:26:17 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:57.532 17:26:17 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:57.532 17:26:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:57.532 17:26:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:57.532 17:26:17 -- common/autotest_common.sh@10 -- # set +x 00:16:57.791 17:26:17 -- nvmf/common.sh@469 -- # nvmfpid=2677322 00:16:57.792 17:26:17 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:57.792 17:26:17 -- nvmf/common.sh@470 -- # waitforlisten 2677322 00:16:57.792 17:26:17 -- common/autotest_common.sh@829 -- # '[' -z 2677322 ']' 00:16:57.792 17:26:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.792 17:26:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.792 17:26:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.792 17:26:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.792 17:26:17 -- common/autotest_common.sh@10 -- # set +x 00:16:57.792 [2024-11-09 17:26:17.345263] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:57.792 [2024-11-09 17:26:17.345306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.792 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.792 [2024-11-09 17:26:17.414188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.792 [2024-11-09 17:26:17.488794] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:57.792 [2024-11-09 17:26:17.488897] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.792 [2024-11-09 17:26:17.488907] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.792 [2024-11-09 17:26:17.488916] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:57.792 [2024-11-09 17:26:17.488955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.792 [2024-11-09 17:26:17.489049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.792 [2024-11-09 17:26:17.489126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.792 [2024-11-09 17:26:17.489127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.730 17:26:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.730 17:26:18 -- common/autotest_common.sh@862 -- # return 0 00:16:58.730 17:26:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:58.730 17:26:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:58.730 17:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:58.730 17:26:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.730 17:26:18 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:58.730 17:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.730 17:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:58.730 [2024-11-09 17:26:18.248893] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16af090/0x16b3580) succeed. 00:16:58.730 [2024-11-09 17:26:18.258211] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16b0680/0x16f4c20) succeed. 00:16:58.730 17:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.730 17:26:18 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:58.730 17:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.730 17:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:58.730 Malloc0 00:16:58.730 17:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.730 17:26:18 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:58.730 17:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.730 17:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:58.730 Malloc1 00:16:58.730 17:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.730 17:26:18 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:58.730 17:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.730 17:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:58.730 17:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.730 17:26:18 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:58.730 17:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.730 17:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:58.730 17:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.730 17:26:18 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:58.730 17:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.730 17:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:58.730 17:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.730 17:26:18 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:58.730 17:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.730 17:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:58.730 [2024-11-09 
17:26:18.456002] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:58.730 17:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.730 17:26:18 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:58.730 17:26:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.730 17:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:58.730 17:26:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.730 17:26:18 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:16:58.990 00:16:58.990 Discovery Log Number of Records 2, Generation counter 2 00:16:58.990 =====Discovery Log Entry 0====== 00:16:58.990 trtype: rdma 00:16:58.990 adrfam: ipv4 00:16:58.990 subtype: current discovery subsystem 00:16:58.990 treq: not required 00:16:58.990 portid: 0 00:16:58.990 trsvcid: 4420 00:16:58.990 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:58.990 traddr: 192.168.100.8 00:16:58.990 eflags: explicit discovery connections, duplicate discovery information 00:16:58.990 rdma_prtype: not specified 00:16:58.990 rdma_qptype: connected 00:16:58.990 rdma_cms: rdma-cm 00:16:58.990 rdma_pkey: 0x0000 00:16:58.990 =====Discovery Log Entry 1====== 00:16:58.990 trtype: rdma 00:16:58.990 adrfam: ipv4 00:16:58.990 subtype: nvme subsystem 00:16:58.990 treq: not required 00:16:58.990 portid: 0 00:16:58.990 trsvcid: 4420 00:16:58.990 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:58.990 traddr: 192.168.100.8 00:16:58.990 eflags: none 00:16:58.990 rdma_prtype: not specified 00:16:58.990 rdma_qptype: connected 00:16:58.990 rdma_cms: rdma-cm 00:16:58.990 rdma_pkey: 0x0000 00:16:58.990 17:26:18 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:58.990 17:26:18 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:58.990 17:26:18 -- nvmf/common.sh@510 -- # local dev _ 00:16:58.990 17:26:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:58.990 17:26:18 -- nvmf/common.sh@509 -- # nvme list 00:16:58.990 17:26:18 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:58.990 17:26:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:58.990 17:26:18 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:58.990 17:26:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:58.990 17:26:18 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:58.990 17:26:18 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:59.933 17:26:19 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:59.933 17:26:19 -- common/autotest_common.sh@1187 -- # local i=0 00:16:59.933 17:26:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:59.933 17:26:19 -- common/autotest_common.sh@1189 -- # [[ -n 2 ]] 00:16:59.933 17:26:19 -- common/autotest_common.sh@1190 -- # nvme_device_counter=2 00:16:59.933 17:26:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:01.840 17:26:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:01.840 17:26:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:01.840 17:26:21 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 
00:17:01.840 17:26:21 -- common/autotest_common.sh@1196 -- # nvme_devices=2 00:17:01.840 17:26:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:01.840 17:26:21 -- common/autotest_common.sh@1197 -- # return 0 00:17:01.840 17:26:21 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:01.840 17:26:21 -- nvmf/common.sh@510 -- # local dev _ 00:17:01.840 17:26:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:01.840 17:26:21 -- nvmf/common.sh@509 -- # nvme list 00:17:01.840 17:26:21 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:01.840 17:26:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:01.840 17:26:21 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:01.840 17:26:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:02.099 17:26:21 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:02.099 17:26:21 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:02.099 17:26:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:02.099 17:26:21 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:02.099 17:26:21 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:02.099 17:26:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:02.099 17:26:21 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:02.099 /dev/nvme0n2 ]] 00:17:02.099 17:26:21 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:02.099 17:26:21 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:02.099 17:26:21 -- nvmf/common.sh@510 -- # local dev _ 00:17:02.099 17:26:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:02.099 17:26:21 -- nvmf/common.sh@509 -- # nvme list 00:17:02.099 17:26:21 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:02.099 17:26:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:02.099 17:26:21 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:02.099 17:26:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:02.099 17:26:21 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:02.099 17:26:21 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:02.099 17:26:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:02.099 17:26:21 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:02.099 17:26:21 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:02.099 17:26:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:02.099 17:26:21 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:02.099 17:26:21 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:03.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.051 17:26:22 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:03.051 17:26:22 -- common/autotest_common.sh@1208 -- # local i=0 00:17:03.051 17:26:22 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:17:03.051 17:26:22 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.051 17:26:22 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:17:03.051 17:26:22 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.051 17:26:22 -- common/autotest_common.sh@1220 -- # return 0 00:17:03.051 17:26:22 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:03.051 17:26:22 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.051 17:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.051 17:26:22 -- common/autotest_common.sh@10 -- # set +x 00:17:03.051 17:26:22 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.051 17:26:22 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:03.051 17:26:22 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:03.051 17:26:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:03.051 17:26:22 -- nvmf/common.sh@116 -- # sync 00:17:03.051 17:26:22 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:03.051 17:26:22 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:03.051 17:26:22 -- nvmf/common.sh@119 -- # set +e 00:17:03.051 17:26:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:03.051 17:26:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:03.051 rmmod nvme_rdma 00:17:03.051 rmmod nvme_fabrics 00:17:03.051 17:26:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:03.051 17:26:22 -- nvmf/common.sh@123 -- # set -e 00:17:03.051 17:26:22 -- nvmf/common.sh@124 -- # return 0 00:17:03.051 17:26:22 -- nvmf/common.sh@477 -- # '[' -n 2677322 ']' 00:17:03.051 17:26:22 -- nvmf/common.sh@478 -- # killprocess 2677322 00:17:03.051 17:26:22 -- common/autotest_common.sh@936 -- # '[' -z 2677322 ']' 00:17:03.051 17:26:22 -- common/autotest_common.sh@940 -- # kill -0 2677322 00:17:03.051 17:26:22 -- common/autotest_common.sh@941 -- # uname 00:17:03.051 17:26:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:03.051 17:26:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2677322 00:17:03.051 17:26:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:03.051 17:26:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:03.051 17:26:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2677322' 00:17:03.051 killing process with pid 2677322 00:17:03.051 17:26:22 -- common/autotest_common.sh@955 -- # kill 2677322 00:17:03.051 17:26:22 -- common/autotest_common.sh@960 -- # wait 2677322 00:17:03.621 17:26:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:03.621 17:26:23 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:03.621 00:17:03.621 real 0m12.784s 00:17:03.621 user 0m24.172s 00:17:03.621 sys 0m5.860s 00:17:03.621 17:26:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:03.621 17:26:23 -- common/autotest_common.sh@10 -- # set +x 00:17:03.621 ************************************ 00:17:03.621 END TEST nvmf_nvme_cli 00:17:03.621 ************************************ 00:17:03.621 17:26:23 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:17:03.621 17:26:23 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:03.621 17:26:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:03.621 17:26:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:03.621 17:26:23 -- common/autotest_common.sh@10 -- # set +x 00:17:03.621 ************************************ 00:17:03.621 START TEST nvmf_host_management 00:17:03.621 ************************************ 00:17:03.621 17:26:23 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:03.621 * Looking for test storage... 
00:17:03.621 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:03.621 17:26:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:03.621 17:26:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:03.621 17:26:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:03.621 17:26:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:03.621 17:26:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:03.621 17:26:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:03.621 17:26:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:03.621 17:26:23 -- scripts/common.sh@335 -- # IFS=.-: 00:17:03.621 17:26:23 -- scripts/common.sh@335 -- # read -ra ver1 00:17:03.621 17:26:23 -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.621 17:26:23 -- scripts/common.sh@336 -- # read -ra ver2 00:17:03.621 17:26:23 -- scripts/common.sh@337 -- # local 'op=<' 00:17:03.621 17:26:23 -- scripts/common.sh@339 -- # ver1_l=2 00:17:03.621 17:26:23 -- scripts/common.sh@340 -- # ver2_l=1 00:17:03.621 17:26:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:03.621 17:26:23 -- scripts/common.sh@343 -- # case "$op" in 00:17:03.621 17:26:23 -- scripts/common.sh@344 -- # : 1 00:17:03.621 17:26:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:03.621 17:26:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:03.621 17:26:23 -- scripts/common.sh@364 -- # decimal 1 00:17:03.621 17:26:23 -- scripts/common.sh@352 -- # local d=1 00:17:03.621 17:26:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.621 17:26:23 -- scripts/common.sh@354 -- # echo 1 00:17:03.621 17:26:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:03.621 17:26:23 -- scripts/common.sh@365 -- # decimal 2 00:17:03.621 17:26:23 -- scripts/common.sh@352 -- # local d=2 00:17:03.621 17:26:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.621 17:26:23 -- scripts/common.sh@354 -- # echo 2 00:17:03.621 17:26:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:03.621 17:26:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:03.621 17:26:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:03.621 17:26:23 -- scripts/common.sh@367 -- # return 0 00:17:03.621 17:26:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.621 17:26:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:03.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.621 --rc genhtml_branch_coverage=1 00:17:03.621 --rc genhtml_function_coverage=1 00:17:03.621 --rc genhtml_legend=1 00:17:03.621 --rc geninfo_all_blocks=1 00:17:03.621 --rc geninfo_unexecuted_blocks=1 00:17:03.621 00:17:03.621 ' 00:17:03.621 17:26:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:03.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.621 --rc genhtml_branch_coverage=1 00:17:03.621 --rc genhtml_function_coverage=1 00:17:03.621 --rc genhtml_legend=1 00:17:03.621 --rc geninfo_all_blocks=1 00:17:03.621 --rc geninfo_unexecuted_blocks=1 00:17:03.621 00:17:03.621 ' 00:17:03.621 17:26:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:03.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.621 --rc genhtml_branch_coverage=1 00:17:03.621 --rc genhtml_function_coverage=1 00:17:03.621 --rc genhtml_legend=1 00:17:03.621 --rc geninfo_all_blocks=1 00:17:03.621 --rc geninfo_unexecuted_blocks=1 00:17:03.621 00:17:03.621 ' 
00:17:03.621 17:26:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:03.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.621 --rc genhtml_branch_coverage=1 00:17:03.621 --rc genhtml_function_coverage=1 00:17:03.621 --rc genhtml_legend=1 00:17:03.621 --rc geninfo_all_blocks=1 00:17:03.621 --rc geninfo_unexecuted_blocks=1 00:17:03.621 00:17:03.621 ' 00:17:03.621 17:26:23 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.621 17:26:23 -- nvmf/common.sh@7 -- # uname -s 00:17:03.621 17:26:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.621 17:26:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.621 17:26:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.621 17:26:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.621 17:26:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.621 17:26:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.621 17:26:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.621 17:26:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.621 17:26:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.621 17:26:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.621 17:26:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:03.621 17:26:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:03.621 17:26:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.621 17:26:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.621 17:26:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.621 17:26:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:03.621 17:26:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.621 17:26:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.621 17:26:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.621 17:26:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.621 17:26:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.621 17:26:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.621 17:26:23 -- paths/export.sh@5 -- # export PATH 00:17:03.621 17:26:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.621 17:26:23 -- nvmf/common.sh@46 -- # : 0 00:17:03.621 17:26:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:03.621 17:26:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:03.621 17:26:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:03.621 17:26:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.622 17:26:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.622 17:26:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:03.622 17:26:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:03.622 17:26:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:03.622 17:26:23 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:03.622 17:26:23 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:03.622 17:26:23 -- target/host_management.sh@104 -- # nvmftestinit 00:17:03.622 17:26:23 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:03.622 17:26:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.622 17:26:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:03.622 17:26:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:03.622 17:26:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:03.622 17:26:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.622 17:26:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.622 17:26:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.622 17:26:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:03.622 17:26:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:03.622 17:26:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:03.622 17:26:23 -- common/autotest_common.sh@10 -- # set +x 00:17:10.344 17:26:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:10.344 17:26:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:10.344 17:26:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:10.344 17:26:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:10.344 17:26:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:10.344 17:26:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:10.344 17:26:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:10.344 17:26:30 -- nvmf/common.sh@294 -- # net_devs=() 00:17:10.344 17:26:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:10.344 
17:26:30 -- nvmf/common.sh@295 -- # e810=() 00:17:10.344 17:26:30 -- nvmf/common.sh@295 -- # local -ga e810 00:17:10.344 17:26:30 -- nvmf/common.sh@296 -- # x722=() 00:17:10.344 17:26:30 -- nvmf/common.sh@296 -- # local -ga x722 00:17:10.344 17:26:30 -- nvmf/common.sh@297 -- # mlx=() 00:17:10.344 17:26:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:10.344 17:26:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.344 17:26:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.344 17:26:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.344 17:26:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.344 17:26:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.344 17:26:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.344 17:26:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.344 17:26:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.344 17:26:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.344 17:26:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.344 17:26:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.344 17:26:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:10.344 17:26:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:10.344 17:26:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:10.344 17:26:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:10.344 17:26:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:10.344 17:26:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:10.344 17:26:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:10.344 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:10.344 17:26:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:10.344 17:26:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:10.344 17:26:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:10.344 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:10.344 17:26:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:10.344 17:26:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:10.344 17:26:30 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:10.344 17:26:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.344 17:26:30 -- 
nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:10.344 17:26:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.344 17:26:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:10.344 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:10.344 17:26:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.344 17:26:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:10.344 17:26:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.344 17:26:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:10.344 17:26:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.344 17:26:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:10.344 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:10.344 17:26:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.344 17:26:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:10.344 17:26:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:10.344 17:26:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:10.344 17:26:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:10.344 17:26:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:10.345 17:26:30 -- nvmf/common.sh@57 -- # uname 00:17:10.345 17:26:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:10.345 17:26:30 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:10.345 17:26:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:10.605 17:26:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:10.605 17:26:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:10.605 17:26:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:10.605 17:26:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:10.605 17:26:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:10.605 17:26:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:10.605 17:26:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:10.605 17:26:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:10.605 17:26:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:10.605 17:26:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:10.605 17:26:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:10.605 17:26:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:10.605 17:26:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:10.605 17:26:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:10.605 17:26:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:10.605 17:26:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:10.605 17:26:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:10.605 17:26:30 -- nvmf/common.sh@104 -- # continue 2 00:17:10.605 17:26:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:10.605 17:26:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:10.605 17:26:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:10.605 17:26:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:10.605 17:26:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:10.605 17:26:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:10.605 17:26:30 -- nvmf/common.sh@104 -- # continue 2 00:17:10.605 17:26:30 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:10.605 17:26:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:10.605 17:26:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:10.605 17:26:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:10.605 17:26:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:10.605 17:26:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:10.605 17:26:30 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:10.605 17:26:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:10.605 17:26:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:10.605 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:10.605 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:10.605 altname enp217s0f0np0 00:17:10.605 altname ens818f0np0 00:17:10.605 inet 192.168.100.8/24 scope global mlx_0_0 00:17:10.605 valid_lft forever preferred_lft forever 00:17:10.605 17:26:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:10.605 17:26:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:10.605 17:26:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:10.605 17:26:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:10.605 17:26:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:10.605 17:26:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:10.605 17:26:30 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:10.605 17:26:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:10.605 17:26:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:10.605 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:10.605 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:10.605 altname enp217s0f1np1 00:17:10.605 altname ens818f1np1 00:17:10.605 inet 192.168.100.9/24 scope global mlx_0_1 00:17:10.605 valid_lft forever preferred_lft forever 00:17:10.605 17:26:30 -- nvmf/common.sh@410 -- # return 0 00:17:10.605 17:26:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:10.605 17:26:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:10.605 17:26:30 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:10.605 17:26:30 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:10.605 17:26:30 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:10.605 17:26:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:10.605 17:26:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:10.605 17:26:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:10.605 17:26:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:10.605 17:26:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:10.605 17:26:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:10.605 17:26:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:10.605 17:26:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:10.605 17:26:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:10.605 17:26:30 -- nvmf/common.sh@104 -- # continue 2 00:17:10.605 17:26:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:10.605 17:26:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:10.605 17:26:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:10.605 17:26:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:10.605 17:26:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:10.605 17:26:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 
00:17:10.605 17:26:30 -- nvmf/common.sh@104 -- # continue 2 00:17:10.605 17:26:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:10.605 17:26:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:10.605 17:26:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:10.605 17:26:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:10.605 17:26:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:10.605 17:26:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:10.605 17:26:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:10.605 17:26:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:10.605 17:26:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:10.605 17:26:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:10.605 17:26:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:10.605 17:26:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:10.605 17:26:30 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:10.605 192.168.100.9' 00:17:10.605 17:26:30 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:10.605 192.168.100.9' 00:17:10.605 17:26:30 -- nvmf/common.sh@445 -- # head -n 1 00:17:10.605 17:26:30 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:10.605 17:26:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:10.605 192.168.100.9' 00:17:10.605 17:26:30 -- nvmf/common.sh@446 -- # tail -n +2 00:17:10.605 17:26:30 -- nvmf/common.sh@446 -- # head -n 1 00:17:10.605 17:26:30 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:10.605 17:26:30 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:10.605 17:26:30 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:10.605 17:26:30 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:10.605 17:26:30 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:10.605 17:26:30 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:10.605 17:26:30 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:10.605 17:26:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:10.605 17:26:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:10.605 17:26:30 -- common/autotest_common.sh@10 -- # set +x 00:17:10.605 ************************************ 00:17:10.605 START TEST nvmf_host_management 00:17:10.605 ************************************ 00:17:10.605 17:26:30 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:17:10.605 17:26:30 -- target/host_management.sh@69 -- # starttarget 00:17:10.605 17:26:30 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:10.605 17:26:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:10.605 17:26:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:10.605 17:26:30 -- common/autotest_common.sh@10 -- # set +x 00:17:10.605 17:26:30 -- nvmf/common.sh@469 -- # nvmfpid=2681710 00:17:10.605 17:26:30 -- nvmf/common.sh@470 -- # waitforlisten 2681710 00:17:10.605 17:26:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:10.605 17:26:30 -- common/autotest_common.sh@829 -- # '[' -z 2681710 ']' 00:17:10.605 17:26:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.605 17:26:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.605 17:26:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:10.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.605 17:26:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.605 17:26:30 -- common/autotest_common.sh@10 -- # set +x 00:17:10.865 [2024-11-09 17:26:30.399742] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:10.865 [2024-11-09 17:26:30.399789] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.865 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.865 [2024-11-09 17:26:30.470280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:10.865 [2024-11-09 17:26:30.544555] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:10.865 [2024-11-09 17:26:30.544662] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.865 [2024-11-09 17:26:30.544672] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.865 [2024-11-09 17:26:30.544681] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.865 [2024-11-09 17:26:30.544778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.865 [2024-11-09 17:26:30.544862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.865 [2024-11-09 17:26:30.544971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.865 [2024-11-09 17:26:30.544972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:11.804 17:26:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.804 17:26:31 -- common/autotest_common.sh@862 -- # return 0 00:17:11.804 17:26:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:11.804 17:26:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:11.804 17:26:31 -- common/autotest_common.sh@10 -- # set +x 00:17:11.804 17:26:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.804 17:26:31 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:11.804 17:26:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.804 17:26:31 -- common/autotest_common.sh@10 -- # set +x 00:17:11.804 [2024-11-09 17:26:31.294958] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1304380/0x1308870) succeed. 00:17:11.804 [2024-11-09 17:26:31.304091] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1305970/0x1349f10) succeed. 
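The rpc_cmd wrapper used above talks to the freshly started nvmf_tgt over /var/tmp/spdk.sock. An equivalent stand-alone invocation with SPDK's rpc.py client, using the same option values shown in the trace, would look roughly like this (paths as in this workspace):

    # Create the RDMA transport on the running target; values match host_management.sh@18.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192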
00:17:11.804 17:26:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.804 17:26:31 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:11.804 17:26:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:11.804 17:26:31 -- common/autotest_common.sh@10 -- # set +x 00:17:11.804 17:26:31 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:11.804 17:26:31 -- target/host_management.sh@23 -- # cat 00:17:11.804 17:26:31 -- target/host_management.sh@30 -- # rpc_cmd 00:17:11.804 17:26:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.804 17:26:31 -- common/autotest_common.sh@10 -- # set +x 00:17:11.804 Malloc0 00:17:11.804 [2024-11-09 17:26:31.481321] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:11.804 17:26:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.804 17:26:31 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:11.804 17:26:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:11.804 17:26:31 -- common/autotest_common.sh@10 -- # set +x 00:17:11.804 17:26:31 -- target/host_management.sh@73 -- # perfpid=2681951 00:17:11.804 17:26:31 -- target/host_management.sh@74 -- # waitforlisten 2681951 /var/tmp/bdevperf.sock 00:17:11.804 17:26:31 -- common/autotest_common.sh@829 -- # '[' -z 2681951 ']' 00:17:11.804 17:26:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.804 17:26:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.804 17:26:31 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:11.804 17:26:31 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:11.804 17:26:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.804 17:26:31 -- nvmf/common.sh@520 -- # config=() 00:17:11.804 17:26:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.804 17:26:31 -- nvmf/common.sh@520 -- # local subsystem config 00:17:11.804 17:26:31 -- common/autotest_common.sh@10 -- # set +x 00:17:11.804 17:26:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:11.804 17:26:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:11.804 { 00:17:11.804 "params": { 00:17:11.804 "name": "Nvme$subsystem", 00:17:11.804 "trtype": "$TEST_TRANSPORT", 00:17:11.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.804 "adrfam": "ipv4", 00:17:11.804 "trsvcid": "$NVMF_PORT", 00:17:11.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.804 "hdgst": ${hdgst:-false}, 00:17:11.804 "ddgst": ${ddgst:-false} 00:17:11.804 }, 00:17:11.804 "method": "bdev_nvme_attach_controller" 00:17:11.804 } 00:17:11.804 EOF 00:17:11.804 )") 00:17:11.804 17:26:31 -- nvmf/common.sh@542 -- # cat 00:17:11.804 17:26:31 -- nvmf/common.sh@544 -- # jq . 
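The rpcs.txt batch fed to rpc_cmd above is not echoed in the log; the NOTICE lines only show its effect (a Malloc0 bdev and an RDMA listener on 192.168.100.8:4420). A plausible reconstruction using standard SPDK RPC names, purely for illustration and not the literal file generated by host_management.sh:

    # Hypothetical rpcs.txt contents; names and sizes are assumptions except for
    # Malloc0, the cnode0/host0 NQNs and the 192.168.100.8:4420 listener seen in the log.
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420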
00:17:11.804 17:26:31 -- nvmf/common.sh@545 -- # IFS=, 00:17:11.804 17:26:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:11.804 "params": { 00:17:11.804 "name": "Nvme0", 00:17:11.804 "trtype": "rdma", 00:17:11.804 "traddr": "192.168.100.8", 00:17:11.804 "adrfam": "ipv4", 00:17:11.804 "trsvcid": "4420", 00:17:11.804 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:11.804 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:11.804 "hdgst": false, 00:17:11.804 "ddgst": false 00:17:11.804 }, 00:17:11.804 "method": "bdev_nvme_attach_controller" 00:17:11.804 }' 00:17:12.064 [2024-11-09 17:26:31.582568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:12.064 [2024-11-09 17:26:31.582619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681951 ] 00:17:12.064 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.064 [2024-11-09 17:26:31.653677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.065 [2024-11-09 17:26:31.721663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.324 Running I/O for 10 seconds... 00:17:12.893 17:26:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.893 17:26:32 -- common/autotest_common.sh@862 -- # return 0 00:17:12.893 17:26:32 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:12.893 17:26:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.893 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:17:12.893 17:26:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.893 17:26:32 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:12.893 17:26:32 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:12.893 17:26:32 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:12.893 17:26:32 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:12.893 17:26:32 -- target/host_management.sh@52 -- # local ret=1 00:17:12.893 17:26:32 -- target/host_management.sh@53 -- # local i 00:17:12.893 17:26:32 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:12.893 17:26:32 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:12.893 17:26:32 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:12.893 17:26:32 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:12.893 17:26:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.893 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:17:12.893 17:26:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.893 17:26:32 -- target/host_management.sh@55 -- # read_io_count=3019 00:17:12.893 17:26:32 -- target/host_management.sh@58 -- # '[' 3019 -ge 100 ']' 00:17:12.893 17:26:32 -- target/host_management.sh@59 -- # ret=0 00:17:12.893 17:26:32 -- target/host_management.sh@60 -- # break 00:17:12.893 17:26:32 -- target/host_management.sh@64 -- # return 0 00:17:12.893 17:26:32 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:12.893 17:26:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.893 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:17:12.893 17:26:32 -- 
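The waitforio loop traced above (host_management.sh@54-60) polls the initiator-side read counter over bdevperf's own RPC socket until at least 100 reads have completed, and only then removes the host from the subsystem to force a disconnect. A condensed sketch of that sequence, assuming the same socket paths and names; the loop bound and sleep are approximations of the harness logic:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Wait until bdevperf has actually completed some I/O against Nvme0n1.
    for i in {10..1}; do
        read_io_count=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && break
        sleep 1
    done

    # Removing host0 tears down the RDMA qpairs mid-I/O (hence the ABORTED - SQ DELETION
    # completions that follow); re-adding it lets the initiator reset and reconnect.
    $rpc -s /var/tmp/spdk.sock nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1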
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.893 17:26:32 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:12.893 17:26:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.893 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:17:12.893 [2024-11-09 17:26:32.479948] rdma.c: 918:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 1 00:17:12.893 17:26:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.893 17:26:32 -- target/host_management.sh@87 -- # sleep 1 00:17:13.830 [2024-11-09 17:26:33.480991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182700 00:17:13.831 [2024-11-09 17:26:33.481023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182400 00:17:13.831 [2024-11-09 17:26:33.481053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182500 00:17:13.831 [2024-11-09 17:26:33.481073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182000 00:17:13.831 [2024-11-09 17:26:33.481093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182500 00:17:13.831 [2024-11-09 17:26:33.481113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182000 00:17:13.831 [2024-11-09 17:26:33.481137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182400 00:17:13.831 [2024-11-09 17:26:33.481157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182500 
00:17:13.831 [2024-11-09 17:26:33.481176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182700 00:17:13.831 [2024-11-09 17:26:33.481196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182400 00:17:13.831 [2024-11-09 17:26:33.481215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182400 00:17:13.831 [2024-11-09 17:26:33.481235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182700 00:17:13.831 [2024-11-09 17:26:33.481255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182400 00:17:13.831 [2024-11-09 17:26:33.481274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182600 00:17:13.831 [2024-11-09 17:26:33.481293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182500 00:17:13.831 [2024-11-09 17:26:33.481313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182600 00:17:13.831 [2024-11-09 17:26:33.481332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182400 00:17:13.831 [2024-11-09 
17:26:33.481352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182500 00:17:13.831 [2024-11-09 17:26:33.481373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182000 00:17:13.831 [2024-11-09 17:26:33.481392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182400 00:17:13.831 [2024-11-09 17:26:33.481412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182500 00:17:13.831 [2024-11-09 17:26:33.481431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182400 00:17:13.831 [2024-11-09 17:26:33.481451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182600 00:17:13.831 [2024-11-09 17:26:33.481475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182700 00:17:13.831 [2024-11-09 17:26:33.481494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182600 00:17:13.831 [2024-11-09 17:26:33.481514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182700 00:17:13.831 [2024-11-09 17:26:33.481533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182600 00:17:13.831 [2024-11-09 17:26:33.481553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182700 00:17:13.831 [2024-11-09 17:26:33.481573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182700 00:17:13.831 [2024-11-09 17:26:33.481594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182700 00:17:13.831 [2024-11-09 17:26:33.481614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182500 00:17:13.831 [2024-11-09 17:26:33.481633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182700 00:17:13.831 [2024-11-09 17:26:33.481652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182000 00:17:13.831 [2024-11-09 17:26:33.481671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182700 00:17:13.831 [2024-11-09 17:26:33.481692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182600 00:17:13.831 [2024-11-09 17:26:33.481712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.831 [2024-11-09 17:26:33.481723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182400 00:17:13.832 [2024-11-09 17:26:33.481732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.481742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182700 00:17:13.832 [2024-11-09 17:26:33.481751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.481762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182600 00:17:13.832 [2024-11-09 17:26:33.481770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.481781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182600 00:17:13.832 [2024-11-09 17:26:33.481790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.481801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182600 00:17:13.832 [2024-11-09 17:26:33.481814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.481825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182500 00:17:13.832 [2024-11-09 17:26:33.481834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.481844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182700 00:17:13.832 [2024-11-09 17:26:33.481853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.481864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182500 00:17:13.832 [2024-11-09 17:26:33.481873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.481886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182500 00:17:13.832 [2024-11-09 17:26:33.481895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.481906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182600 00:17:13.832 [2024-11-09 17:26:33.481914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.481925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182600 00:17:13.832 [2024-11-09 17:26:33.481934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.481944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182400 00:17:13.832 [2024-11-09 17:26:33.481953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.481964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182400 00:17:13.832 [2024-11-09 17:26:33.481973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.481983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182700 00:17:13.832 [2024-11-09 17:26:33.481992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.482002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182000 00:17:13.832 [2024-11-09 17:26:33.482011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.482022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182400 00:17:13.832 [2024-11-09 17:26:33.482032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.482042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182500 00:17:13.832 [2024-11-09 17:26:33.482051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.482062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182600 00:17:13.832 [2024-11-09 17:26:33.482070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 
00:17:13.832 [2024-11-09 17:26:33.482081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182600 00:17:13.832 [2024-11-09 17:26:33.482090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.482100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182500 00:17:13.832 [2024-11-09 17:26:33.482109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.482119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182400 00:17:13.832 [2024-11-09 17:26:33.482128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.482139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182600 00:17:13.832 [2024-11-09 17:26:33.482148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.482158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182500 00:17:13.832 [2024-11-09 17:26:33.482167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.482178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182600 00:17:13.832 [2024-11-09 17:26:33.482186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.482198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182700 00:17:13.832 [2024-11-09 17:26:33.482207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.482217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182500 00:17:13.832 [2024-11-09 17:26:33.482226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.482236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182500 00:17:13.832 [2024-11-09 17:26:33.482245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.482257] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182400 00:17:13.832 [2024-11-09 17:26:33.482266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.482276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182400 00:17:13.832 [2024-11-09 17:26:33.482285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2aeac000 sqhd:5310 p:0 m:0 dnr:0 00:17:13.832 [2024-11-09 17:26:33.484202] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller. 00:17:13.832 [2024-11-09 17:26:33.485077] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:13.832 task offset: 17536 on job bdev=Nvme0n1 fails 00:17:13.832 00:17:13.832 Latency(us) 00:17:13.832 [2024-11-09T16:26:33.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.832 [2024-11-09T16:26:33.602Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:13.832 [2024-11-09T16:26:33.602Z] Job: Nvme0n1 ended in about 1.59 seconds with error 00:17:13.832 Verification LBA range: start 0x0 length 0x400 00:17:13.832 Nvme0n1 : 1.59 2038.14 127.38 40.30 0.00 30607.27 3486.52 1020054.73 00:17:13.832 [2024-11-09T16:26:33.602Z] =================================================================================================================== 00:17:13.832 [2024-11-09T16:26:33.602Z] Total : 2038.14 127.38 40.30 0.00 30607.27 3486.52 1020054.73 00:17:13.832 [2024-11-09 17:26:33.486742] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:13.832 17:26:33 -- target/host_management.sh@91 -- # kill -9 2681951 00:17:13.832 17:26:33 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:13.832 17:26:33 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:13.832 17:26:33 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:13.832 17:26:33 -- nvmf/common.sh@520 -- # config=() 00:17:13.832 17:26:33 -- nvmf/common.sh@520 -- # local subsystem config 00:17:13.832 17:26:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:13.832 17:26:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:13.832 { 00:17:13.832 "params": { 00:17:13.832 "name": "Nvme$subsystem", 00:17:13.832 "trtype": "$TEST_TRANSPORT", 00:17:13.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.832 "adrfam": "ipv4", 00:17:13.832 "trsvcid": "$NVMF_PORT", 00:17:13.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.833 "hdgst": ${hdgst:-false}, 00:17:13.833 "ddgst": ${ddgst:-false} 00:17:13.833 }, 00:17:13.833 "method": "bdev_nvme_attach_controller" 00:17:13.833 } 00:17:13.833 EOF 00:17:13.833 )") 00:17:13.833 17:26:33 -- nvmf/common.sh@542 -- # cat 00:17:13.833 17:26:33 -- nvmf/common.sh@544 -- # jq . 
00:17:13.833 17:26:33 -- nvmf/common.sh@545 -- # IFS=, 00:17:13.833 17:26:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:13.833 "params": { 00:17:13.833 "name": "Nvme0", 00:17:13.833 "trtype": "rdma", 00:17:13.833 "traddr": "192.168.100.8", 00:17:13.833 "adrfam": "ipv4", 00:17:13.833 "trsvcid": "4420", 00:17:13.833 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:13.833 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:13.833 "hdgst": false, 00:17:13.833 "ddgst": false 00:17:13.833 }, 00:17:13.833 "method": "bdev_nvme_attach_controller" 00:17:13.833 }' 00:17:13.833 [2024-11-09 17:26:33.537728] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:13.833 [2024-11-09 17:26:33.537782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682292 ] 00:17:13.833 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.092 [2024-11-09 17:26:33.607144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.092 [2024-11-09 17:26:33.675178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.092 Running I/O for 1 seconds... 00:17:15.472 00:17:15.472 Latency(us) 00:17:15.472 [2024-11-09T16:26:35.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.472 [2024-11-09T16:26:35.242Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:15.472 Verification LBA range: start 0x0 length 0x400 00:17:15.472 Nvme0n1 : 1.00 5625.87 351.62 0.00 0.00 11204.17 452.20 24641.54 00:17:15.472 [2024-11-09T16:26:35.242Z] =================================================================================================================== 00:17:15.472 [2024-11-09T16:26:35.242Z] Total : 5625.87 351.62 0.00 0.00 11204.17 452.20 24641.54 00:17:15.472 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2681951 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:17:15.472 17:26:35 -- target/host_management.sh@101 -- # stoptarget 00:17:15.472 17:26:35 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:15.472 17:26:35 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:15.472 17:26:35 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:15.472 17:26:35 -- target/host_management.sh@40 -- # nvmftestfini 00:17:15.472 17:26:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:15.472 17:26:35 -- nvmf/common.sh@116 -- # sync 00:17:15.472 17:26:35 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:15.472 17:26:35 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:15.472 17:26:35 -- nvmf/common.sh@119 -- # set +e 00:17:15.472 17:26:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:15.472 17:26:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:15.472 rmmod nvme_rdma 00:17:15.472 rmmod nvme_fabrics 00:17:15.472 17:26:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:15.472 17:26:35 -- nvmf/common.sh@123 -- # set -e 00:17:15.472 17:26:35 -- nvmf/common.sh@124 -- # return 0 00:17:15.472 17:26:35 -- nvmf/common.sh@477 -- # '[' -n 2681710 ']' 00:17:15.472 17:26:35 -- nvmf/common.sh@478 -- # killprocess 2681710 00:17:15.472 
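The nvmftestfini teardown traced above unloads the initiator-side kernel modules before killing the target process. The module teardown reduces to the following; module names are exactly those printed by modprobe in the log, the rest is a sketch:

    # Flush outstanding I/O, then unload the kernel NVMe-oF initiator stack.
    # set +e in the harness tolerates modules that are already gone.
    sync
    modprobe -v -r nvme-rdma      # log shows "rmmod nvme_rdma"
    modprobe -v -r nvme-fabrics   # log shows "rmmod nvme_fabrics"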
17:26:35 -- common/autotest_common.sh@936 -- # '[' -z 2681710 ']' 00:17:15.472 17:26:35 -- common/autotest_common.sh@940 -- # kill -0 2681710 00:17:15.472 17:26:35 -- common/autotest_common.sh@941 -- # uname 00:17:15.472 17:26:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:15.472 17:26:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2681710 00:17:15.472 17:26:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:15.472 17:26:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:15.472 17:26:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2681710' 00:17:15.472 killing process with pid 2681710 00:17:15.472 17:26:35 -- common/autotest_common.sh@955 -- # kill 2681710 00:17:15.472 17:26:35 -- common/autotest_common.sh@960 -- # wait 2681710 00:17:15.731 [2024-11-09 17:26:35.487139] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:15.989 17:26:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:15.989 17:26:35 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:15.989 00:17:15.989 real 0m5.161s 00:17:15.989 user 0m23.049s 00:17:15.989 sys 0m1.031s 00:17:15.989 17:26:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:15.989 17:26:35 -- common/autotest_common.sh@10 -- # set +x 00:17:15.989 ************************************ 00:17:15.989 END TEST nvmf_host_management 00:17:15.989 ************************************ 00:17:15.989 17:26:35 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:15.989 00:17:15.990 real 0m12.399s 00:17:15.990 user 0m25.067s 00:17:15.990 sys 0m6.479s 00:17:15.990 17:26:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:15.990 17:26:35 -- common/autotest_common.sh@10 -- # set +x 00:17:15.990 ************************************ 00:17:15.990 END TEST nvmf_host_management 00:17:15.990 ************************************ 00:17:15.990 17:26:35 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:15.990 17:26:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:15.990 17:26:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:15.990 17:26:35 -- common/autotest_common.sh@10 -- # set +x 00:17:15.990 ************************************ 00:17:15.990 START TEST nvmf_lvol 00:17:15.990 ************************************ 00:17:15.990 17:26:35 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:15.990 * Looking for test storage... 
00:17:15.990 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:15.990 17:26:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:15.990 17:26:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:15.990 17:26:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:16.249 17:26:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:16.249 17:26:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:16.249 17:26:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:16.249 17:26:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:16.249 17:26:35 -- scripts/common.sh@335 -- # IFS=.-: 00:17:16.249 17:26:35 -- scripts/common.sh@335 -- # read -ra ver1 00:17:16.249 17:26:35 -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.249 17:26:35 -- scripts/common.sh@336 -- # read -ra ver2 00:17:16.249 17:26:35 -- scripts/common.sh@337 -- # local 'op=<' 00:17:16.249 17:26:35 -- scripts/common.sh@339 -- # ver1_l=2 00:17:16.249 17:26:35 -- scripts/common.sh@340 -- # ver2_l=1 00:17:16.249 17:26:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:16.249 17:26:35 -- scripts/common.sh@343 -- # case "$op" in 00:17:16.249 17:26:35 -- scripts/common.sh@344 -- # : 1 00:17:16.249 17:26:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:16.249 17:26:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:16.249 17:26:35 -- scripts/common.sh@364 -- # decimal 1 00:17:16.249 17:26:35 -- scripts/common.sh@352 -- # local d=1 00:17:16.249 17:26:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.249 17:26:35 -- scripts/common.sh@354 -- # echo 1 00:17:16.249 17:26:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:16.249 17:26:35 -- scripts/common.sh@365 -- # decimal 2 00:17:16.249 17:26:35 -- scripts/common.sh@352 -- # local d=2 00:17:16.249 17:26:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.249 17:26:35 -- scripts/common.sh@354 -- # echo 2 00:17:16.249 17:26:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:16.249 17:26:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:16.249 17:26:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:16.249 17:26:35 -- scripts/common.sh@367 -- # return 0 00:17:16.249 17:26:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.249 17:26:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:16.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.249 --rc genhtml_branch_coverage=1 00:17:16.249 --rc genhtml_function_coverage=1 00:17:16.249 --rc genhtml_legend=1 00:17:16.249 --rc geninfo_all_blocks=1 00:17:16.249 --rc geninfo_unexecuted_blocks=1 00:17:16.249 00:17:16.249 ' 00:17:16.249 17:26:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:16.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.249 --rc genhtml_branch_coverage=1 00:17:16.249 --rc genhtml_function_coverage=1 00:17:16.249 --rc genhtml_legend=1 00:17:16.249 --rc geninfo_all_blocks=1 00:17:16.249 --rc geninfo_unexecuted_blocks=1 00:17:16.249 00:17:16.249 ' 00:17:16.249 17:26:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:16.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.249 --rc genhtml_branch_coverage=1 00:17:16.249 --rc genhtml_function_coverage=1 00:17:16.249 --rc genhtml_legend=1 00:17:16.249 --rc geninfo_all_blocks=1 00:17:16.249 --rc geninfo_unexecuted_blocks=1 00:17:16.249 00:17:16.249 ' 
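The lcov version check traced above (scripts/common.sh@332-367) is a field-by-field numeric comparison of dotted version strings. A minimal sketch of that comparison, assuming the same split characters as the trace; the real cmp_versions supports more operators:

    lt() {  # return 0 if version $1 is lower than version $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x, use the legacy --rc options"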
00:17:16.249 17:26:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:16.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.249 --rc genhtml_branch_coverage=1 00:17:16.249 --rc genhtml_function_coverage=1 00:17:16.249 --rc genhtml_legend=1 00:17:16.249 --rc geninfo_all_blocks=1 00:17:16.249 --rc geninfo_unexecuted_blocks=1 00:17:16.249 00:17:16.249 ' 00:17:16.249 17:26:35 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.249 17:26:35 -- nvmf/common.sh@7 -- # uname -s 00:17:16.249 17:26:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.249 17:26:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.249 17:26:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.249 17:26:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.249 17:26:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.249 17:26:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.249 17:26:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.249 17:26:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.249 17:26:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.249 17:26:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.249 17:26:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:16.249 17:26:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:16.249 17:26:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.249 17:26:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.249 17:26:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.249 17:26:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:16.249 17:26:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.250 17:26:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.250 17:26:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.250 17:26:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.250 17:26:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.250 17:26:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.250 17:26:35 -- paths/export.sh@5 -- # export PATH 00:17:16.250 17:26:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.250 17:26:35 -- nvmf/common.sh@46 -- # : 0 00:17:16.250 17:26:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:16.250 17:26:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:16.250 17:26:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:16.250 17:26:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.250 17:26:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.250 17:26:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:16.250 17:26:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:16.250 17:26:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:16.250 17:26:35 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:16.250 17:26:35 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:16.250 17:26:35 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:16.250 17:26:35 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:16.250 17:26:35 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:16.250 17:26:35 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:16.250 17:26:35 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:16.250 17:26:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.250 17:26:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:16.250 17:26:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:16.250 17:26:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:16.250 17:26:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.250 17:26:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.250 17:26:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.250 17:26:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:16.250 17:26:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:16.250 17:26:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:16.250 17:26:35 -- common/autotest_common.sh@10 -- # set +x 00:17:22.820 17:26:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:22.820 17:26:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:22.820 17:26:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:22.820 17:26:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:22.820 17:26:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:22.820 17:26:42 -- 
nvmf/common.sh@292 -- # pci_drivers=() 00:17:22.820 17:26:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:22.820 17:26:42 -- nvmf/common.sh@294 -- # net_devs=() 00:17:22.820 17:26:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:22.820 17:26:42 -- nvmf/common.sh@295 -- # e810=() 00:17:22.820 17:26:42 -- nvmf/common.sh@295 -- # local -ga e810 00:17:22.820 17:26:42 -- nvmf/common.sh@296 -- # x722=() 00:17:22.820 17:26:42 -- nvmf/common.sh@296 -- # local -ga x722 00:17:22.820 17:26:42 -- nvmf/common.sh@297 -- # mlx=() 00:17:22.820 17:26:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:22.820 17:26:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:22.820 17:26:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:22.820 17:26:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:22.820 17:26:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:22.820 17:26:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:22.820 17:26:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:22.820 17:26:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:22.820 17:26:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:22.820 17:26:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:22.820 17:26:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:22.820 17:26:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:22.820 17:26:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:22.820 17:26:42 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:22.820 17:26:42 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:22.820 17:26:42 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:22.820 17:26:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:22.820 17:26:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:22.820 17:26:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:22.820 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:22.820 17:26:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:22.820 17:26:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:22.820 17:26:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:22.820 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:22.820 17:26:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:22.820 17:26:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:22.820 17:26:42 -- 
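The scan above matches Mellanox (0x15b3) device IDs against the PCI bus cache and, as traced just below, resolves each hit to a net device through sysfs. A condensed sketch of the per-device step, with the PCI addresses taken from the log and everything else illustrative:

    # One ConnectX port per PCI function; the kernel exposes its netdev name under
    # /sys/bus/pci/devices/<addr>/net/. Assumes the devices are bound to mlx5_core.
    for pci in 0000:d9:00.0 0000:d9:00.1; do
        pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done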
nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:22.820 17:26:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.820 17:26:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:22.820 17:26:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.820 17:26:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:22.820 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:22.820 17:26:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.820 17:26:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:22.820 17:26:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.820 17:26:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:22.820 17:26:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.820 17:26:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:22.820 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:22.820 17:26:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.820 17:26:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:22.820 17:26:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:22.820 17:26:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:22.820 17:26:42 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:22.820 17:26:42 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:22.820 17:26:42 -- nvmf/common.sh@57 -- # uname 00:17:22.820 17:26:42 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:22.820 17:26:42 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:22.820 17:26:42 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:22.820 17:26:42 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:22.820 17:26:42 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:22.820 17:26:42 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:22.820 17:26:42 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:22.820 17:26:42 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:22.820 17:26:42 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:22.820 17:26:42 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:22.820 17:26:42 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:22.820 17:26:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:22.820 17:26:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:22.820 17:26:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:22.820 17:26:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:22.820 17:26:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:22.820 17:26:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:22.821 17:26:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.821 17:26:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:22.821 17:26:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:22.821 17:26:42 -- nvmf/common.sh@104 -- # continue 2 00:17:22.821 17:26:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:22.821 17:26:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.821 17:26:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:22.821 17:26:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:17:22.821 17:26:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:22.821 17:26:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:22.821 17:26:42 -- nvmf/common.sh@104 -- # continue 2 00:17:22.821 17:26:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:22.821 17:26:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:22.821 17:26:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:22.821 17:26:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:22.821 17:26:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:22.821 17:26:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:22.821 17:26:42 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:22.821 17:26:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:22.821 17:26:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:22.821 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:22.821 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:22.821 altname enp217s0f0np0 00:17:22.821 altname ens818f0np0 00:17:22.821 inet 192.168.100.8/24 scope global mlx_0_0 00:17:22.821 valid_lft forever preferred_lft forever 00:17:22.821 17:26:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:22.821 17:26:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:22.821 17:26:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:22.821 17:26:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:22.821 17:26:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:22.821 17:26:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:22.821 17:26:42 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:22.821 17:26:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:22.821 17:26:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:22.821 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:22.821 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:22.821 altname enp217s0f1np1 00:17:22.821 altname ens818f1np1 00:17:22.821 inet 192.168.100.9/24 scope global mlx_0_1 00:17:22.821 valid_lft forever preferred_lft forever 00:17:22.821 17:26:42 -- nvmf/common.sh@410 -- # return 0 00:17:22.821 17:26:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:22.821 17:26:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:22.821 17:26:42 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:22.821 17:26:42 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:22.821 17:26:42 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:22.821 17:26:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:22.821 17:26:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:22.821 17:26:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:22.821 17:26:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:22.821 17:26:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:22.821 17:26:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:22.821 17:26:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.821 17:26:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:22.821 17:26:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:22.821 17:26:42 -- nvmf/common.sh@104 -- # continue 2 00:17:22.821 17:26:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:22.821 17:26:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.821 17:26:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:17:22.821 17:26:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.821 17:26:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:22.821 17:26:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:22.821 17:26:42 -- nvmf/common.sh@104 -- # continue 2 00:17:22.821 17:26:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:22.821 17:26:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:22.821 17:26:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:22.821 17:26:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:22.821 17:26:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:22.821 17:26:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:22.821 17:26:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:22.821 17:26:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:22.821 17:26:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:22.821 17:26:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:22.821 17:26:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:22.821 17:26:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:22.821 17:26:42 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:22.821 192.168.100.9' 00:17:22.821 17:26:42 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:22.821 192.168.100.9' 00:17:22.821 17:26:42 -- nvmf/common.sh@445 -- # head -n 1 00:17:22.821 17:26:42 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:22.821 17:26:42 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:22.821 192.168.100.9' 00:17:22.821 17:26:42 -- nvmf/common.sh@446 -- # tail -n +2 00:17:22.821 17:26:42 -- nvmf/common.sh@446 -- # head -n 1 00:17:22.821 17:26:42 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:22.821 17:26:42 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:22.821 17:26:42 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:22.821 17:26:42 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:22.821 17:26:42 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:22.821 17:26:42 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:22.821 17:26:42 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:22.821 17:26:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:22.821 17:26:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:22.821 17:26:42 -- common/autotest_common.sh@10 -- # set +x 00:17:22.821 17:26:42 -- nvmf/common.sh@469 -- # nvmfpid=2685984 00:17:22.821 17:26:42 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:22.821 17:26:42 -- nvmf/common.sh@470 -- # waitforlisten 2685984 00:17:22.821 17:26:42 -- common/autotest_common.sh@829 -- # '[' -z 2685984 ']' 00:17:22.821 17:26:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.821 17:26:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.821 17:26:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.821 17:26:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.821 17:26:42 -- common/autotest_common.sh@10 -- # set +x 00:17:22.821 [2024-11-09 17:26:42.397299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
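(For reference: the get_ip_address calls traced above reduce to the shell pattern below. This is a sketch rather than the literal common.sh code; mlx_0_0 and mlx_0_1 are the RDMA netdevs found in this run.)
for ifc in mlx_0_0 mlx_0_1; do
    # field $4 of `ip -o -4 addr show` is the inet address with prefix, e.g. 192.168.100.8/24
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# prints 192.168.100.8 and 192.168.100.9, which become NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP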
00:17:22.821 [2024-11-09 17:26:42.397346] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.821 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.821 [2024-11-09 17:26:42.466247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:22.821 [2024-11-09 17:26:42.537841] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.821 [2024-11-09 17:26:42.537955] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.821 [2024-11-09 17:26:42.537965] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.821 [2024-11-09 17:26:42.537974] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.821 [2024-11-09 17:26:42.538027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.821 [2024-11-09 17:26:42.538127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.821 [2024-11-09 17:26:42.538127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.759 17:26:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.759 17:26:43 -- common/autotest_common.sh@862 -- # return 0 00:17:23.759 17:26:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:23.759 17:26:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:23.759 17:26:43 -- common/autotest_common.sh@10 -- # set +x 00:17:23.759 17:26:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.759 17:26:43 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:23.759 [2024-11-09 17:26:43.440611] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x251a560/0x251ea50) succeed. 00:17:23.759 [2024-11-09 17:26:43.449753] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x251bab0/0x25600f0) succeed. 
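(For reference: condensed, the nvmf_lvol target setup traced in the lines that follow is roughly the rpc.py sequence below. It is a sketch; the UUIDs are returned by the create calls.)
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512                                   # -> Malloc0
$rpc bdev_malloc_create 64 512                                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # lvol UUID, size LVOL_BDEV_INIT_SIZE=20
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420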
00:17:24.019 17:26:43 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:24.019 17:26:43 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:24.019 17:26:43 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:24.278 17:26:43 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:24.278 17:26:43 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:24.538 17:26:44 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:24.797 17:26:44 -- target/nvmf_lvol.sh@29 -- # lvs=a4a967d9-c977-46ce-99b4-5c568138f937 00:17:24.797 17:26:44 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a4a967d9-c977-46ce-99b4-5c568138f937 lvol 20 00:17:24.797 17:26:44 -- target/nvmf_lvol.sh@32 -- # lvol=0554ea58-2217-4da4-86bf-5f7c9ea668c1 00:17:24.797 17:26:44 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:25.057 17:26:44 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0554ea58-2217-4da4-86bf-5f7c9ea668c1 00:17:25.316 17:26:44 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:25.316 [2024-11-09 17:26:45.053103] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:25.316 17:26:45 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:25.575 17:26:45 -- target/nvmf_lvol.sh@42 -- # perf_pid=2686556 00:17:25.575 17:26:45 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:25.575 17:26:45 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:25.575 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.512 17:26:46 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0554ea58-2217-4da4-86bf-5f7c9ea668c1 MY_SNAPSHOT 00:17:26.771 17:26:46 -- target/nvmf_lvol.sh@47 -- # snapshot=2272db8b-9893-4fc3-bfbf-a9889f1c5713 00:17:26.771 17:26:46 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0554ea58-2217-4da4-86bf-5f7c9ea668c1 30 00:17:27.030 17:26:46 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2272db8b-9893-4fc3-bfbf-a9889f1c5713 MY_CLONE 00:17:27.289 17:26:46 -- target/nvmf_lvol.sh@49 -- # clone=9ab68103-a2ac-48ef-9209-a3a3ef4473fb 00:17:27.289 17:26:46 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9ab68103-a2ac-48ef-9209-a3a3ef4473fb 00:17:27.548 17:26:47 -- target/nvmf_lvol.sh@53 -- # wait 2686556 00:17:37.533 Initializing NVMe Controllers 00:17:37.533 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:17:37.533 Controller IO queue size 128, less than required. 00:17:37.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:37.533 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:37.533 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:37.533 Initialization complete. Launching workers. 00:17:37.533 ======================================================== 00:17:37.533 Latency(us) 00:17:37.533 Device Information : IOPS MiB/s Average min max 00:17:37.533 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17018.70 66.48 7523.48 2035.53 37345.52 00:17:37.533 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16949.30 66.21 7553.73 3172.43 35271.00 00:17:37.533 ======================================================== 00:17:37.533 Total : 33968.00 132.69 7538.57 2035.53 37345.52 00:17:37.533 00:17:37.533 17:26:56 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:37.533 17:26:56 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0554ea58-2217-4da4-86bf-5f7c9ea668c1 00:17:37.533 17:26:56 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4a967d9-c977-46ce-99b4-5c568138f937 00:17:37.533 17:26:57 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:37.533 17:26:57 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:37.533 17:26:57 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:37.533 17:26:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:37.533 17:26:57 -- nvmf/common.sh@116 -- # sync 00:17:37.533 17:26:57 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:37.533 17:26:57 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:37.533 17:26:57 -- nvmf/common.sh@119 -- # set +e 00:17:37.533 17:26:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:37.533 17:26:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:37.533 rmmod nvme_rdma 00:17:37.533 rmmod nvme_fabrics 00:17:37.533 17:26:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:37.533 17:26:57 -- nvmf/common.sh@123 -- # set -e 00:17:37.533 17:26:57 -- nvmf/common.sh@124 -- # return 0 00:17:37.533 17:26:57 -- nvmf/common.sh@477 -- # '[' -n 2685984 ']' 00:17:37.533 17:26:57 -- nvmf/common.sh@478 -- # killprocess 2685984 00:17:37.533 17:26:57 -- common/autotest_common.sh@936 -- # '[' -z 2685984 ']' 00:17:37.533 17:26:57 -- common/autotest_common.sh@940 -- # kill -0 2685984 00:17:37.533 17:26:57 -- common/autotest_common.sh@941 -- # uname 00:17:37.533 17:26:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:37.533 17:26:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2685984 00:17:37.793 17:26:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:37.793 17:26:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:37.793 17:26:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2685984' 00:17:37.793 killing process with pid 2685984 00:17:37.793 17:26:57 -- common/autotest_common.sh@955 -- # kill 2685984 00:17:37.793 17:26:57 -- common/autotest_common.sh@960 -- # wait 2685984 00:17:38.053 17:26:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:38.053 17:26:57 -- 
nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:38.053 00:17:38.053 real 0m22.030s 00:17:38.053 user 1m11.620s 00:17:38.053 sys 0m6.226s 00:17:38.053 17:26:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:38.053 17:26:57 -- common/autotest_common.sh@10 -- # set +x 00:17:38.053 ************************************ 00:17:38.053 END TEST nvmf_lvol 00:17:38.053 ************************************ 00:17:38.053 17:26:57 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:17:38.053 17:26:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:38.053 17:26:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:38.053 17:26:57 -- common/autotest_common.sh@10 -- # set +x 00:17:38.053 ************************************ 00:17:38.053 START TEST nvmf_lvs_grow 00:17:38.053 ************************************ 00:17:38.053 17:26:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:17:38.053 * Looking for test storage... 00:17:38.053 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:38.053 17:26:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:38.053 17:26:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:38.053 17:26:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:38.312 17:26:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:38.312 17:26:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:38.312 17:26:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:38.312 17:26:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:38.312 17:26:57 -- scripts/common.sh@335 -- # IFS=.-: 00:17:38.312 17:26:57 -- scripts/common.sh@335 -- # read -ra ver1 00:17:38.312 17:26:57 -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.312 17:26:57 -- scripts/common.sh@336 -- # read -ra ver2 00:17:38.312 17:26:57 -- scripts/common.sh@337 -- # local 'op=<' 00:17:38.312 17:26:57 -- scripts/common.sh@339 -- # ver1_l=2 00:17:38.312 17:26:57 -- scripts/common.sh@340 -- # ver2_l=1 00:17:38.312 17:26:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:38.312 17:26:57 -- scripts/common.sh@343 -- # case "$op" in 00:17:38.312 17:26:57 -- scripts/common.sh@344 -- # : 1 00:17:38.313 17:26:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:38.313 17:26:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:38.313 17:26:57 -- scripts/common.sh@364 -- # decimal 1 00:17:38.313 17:26:57 -- scripts/common.sh@352 -- # local d=1 00:17:38.313 17:26:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.313 17:26:57 -- scripts/common.sh@354 -- # echo 1 00:17:38.313 17:26:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:38.313 17:26:57 -- scripts/common.sh@365 -- # decimal 2 00:17:38.313 17:26:57 -- scripts/common.sh@352 -- # local d=2 00:17:38.313 17:26:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.313 17:26:57 -- scripts/common.sh@354 -- # echo 2 00:17:38.313 17:26:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:38.313 17:26:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:38.313 17:26:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:38.313 17:26:57 -- scripts/common.sh@367 -- # return 0 00:17:38.313 17:26:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.313 17:26:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:38.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.313 --rc genhtml_branch_coverage=1 00:17:38.313 --rc genhtml_function_coverage=1 00:17:38.313 --rc genhtml_legend=1 00:17:38.313 --rc geninfo_all_blocks=1 00:17:38.313 --rc geninfo_unexecuted_blocks=1 00:17:38.313 00:17:38.313 ' 00:17:38.313 17:26:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:38.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.313 --rc genhtml_branch_coverage=1 00:17:38.313 --rc genhtml_function_coverage=1 00:17:38.313 --rc genhtml_legend=1 00:17:38.313 --rc geninfo_all_blocks=1 00:17:38.313 --rc geninfo_unexecuted_blocks=1 00:17:38.313 00:17:38.313 ' 00:17:38.313 17:26:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:38.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.313 --rc genhtml_branch_coverage=1 00:17:38.313 --rc genhtml_function_coverage=1 00:17:38.313 --rc genhtml_legend=1 00:17:38.313 --rc geninfo_all_blocks=1 00:17:38.313 --rc geninfo_unexecuted_blocks=1 00:17:38.313 00:17:38.313 ' 00:17:38.313 17:26:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:38.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.313 --rc genhtml_branch_coverage=1 00:17:38.313 --rc genhtml_function_coverage=1 00:17:38.313 --rc genhtml_legend=1 00:17:38.313 --rc geninfo_all_blocks=1 00:17:38.313 --rc geninfo_unexecuted_blocks=1 00:17:38.313 00:17:38.313 ' 00:17:38.313 17:26:57 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.313 17:26:57 -- nvmf/common.sh@7 -- # uname -s 00:17:38.313 17:26:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.313 17:26:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.313 17:26:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.313 17:26:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.313 17:26:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.313 17:26:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.313 17:26:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.313 17:26:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.313 17:26:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.313 17:26:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.313 17:26:57 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:38.313 17:26:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:38.313 17:26:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.313 17:26:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.313 17:26:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.313 17:26:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:38.313 17:26:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.313 17:26:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.313 17:26:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.313 17:26:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.313 17:26:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.313 17:26:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.313 17:26:57 -- paths/export.sh@5 -- # export PATH 00:17:38.313 17:26:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.313 17:26:57 -- nvmf/common.sh@46 -- # : 0 00:17:38.313 17:26:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:38.313 17:26:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:38.313 17:26:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:38.313 17:26:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.313 17:26:57 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.313 17:26:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:38.313 17:26:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:38.313 17:26:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:38.313 17:26:57 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:38.313 17:26:57 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:38.313 17:26:57 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:38.313 17:26:57 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:38.313 17:26:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.313 17:26:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:38.313 17:26:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:38.313 17:26:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:38.313 17:26:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.313 17:26:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.313 17:26:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.313 17:26:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:38.313 17:26:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:38.313 17:26:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:38.313 17:26:57 -- common/autotest_common.sh@10 -- # set +x 00:17:44.885 17:27:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:44.885 17:27:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:44.885 17:27:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:44.885 17:27:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:44.885 17:27:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:44.885 17:27:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:44.885 17:27:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:44.885 17:27:04 -- nvmf/common.sh@294 -- # net_devs=() 00:17:44.885 17:27:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:44.885 17:27:04 -- nvmf/common.sh@295 -- # e810=() 00:17:44.885 17:27:04 -- nvmf/common.sh@295 -- # local -ga e810 00:17:44.885 17:27:04 -- nvmf/common.sh@296 -- # x722=() 00:17:44.885 17:27:04 -- nvmf/common.sh@296 -- # local -ga x722 00:17:44.885 17:27:04 -- nvmf/common.sh@297 -- # mlx=() 00:17:44.885 17:27:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:44.885 17:27:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.885 17:27:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.885 17:27:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.885 17:27:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.885 17:27:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.885 17:27:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.885 17:27:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.885 17:27:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.885 17:27:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.885 17:27:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.885 17:27:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.885 17:27:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:44.885 17:27:04 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 
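(For reference: the NIC discovery above keys off cached PCI vendor:device IDs. The snippet below is a stripped-down illustration that reads /sys directly, not the literal common.sh code; 0x15b3 is the Mellanox vendor ID and 0x1015 the device ID reported for both ports in this run.)
mellanox=0x15b3
for pci in /sys/bus/pci/devices/*; do
    ven=$(cat "$pci/vendor"); dev=$(cat "$pci/device")
    if [[ $ven == "$mellanox" && $dev == 0x1015 ]]; then
        echo "Found $(basename "$pci") ($ven - $dev)"
        ls "$pci/net"    # the netdev name(s) behind the port, mlx_0_0 / mlx_0_1 here
    fi
done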
00:17:44.885 17:27:04 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:44.885 17:27:04 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:44.885 17:27:04 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:44.885 17:27:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:44.885 17:27:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:44.885 17:27:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:44.885 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:44.885 17:27:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:44.885 17:27:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:44.885 17:27:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:44.885 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:44.885 17:27:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:44.885 17:27:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:44.885 17:27:04 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:44.885 17:27:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.885 17:27:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:44.885 17:27:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.885 17:27:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:44.885 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:44.885 17:27:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.885 17:27:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:44.885 17:27:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.885 17:27:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:44.885 17:27:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.885 17:27:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:44.885 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:44.885 17:27:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.885 17:27:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:44.885 17:27:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:44.885 17:27:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:44.885 17:27:04 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:44.885 17:27:04 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:44.885 17:27:04 -- nvmf/common.sh@57 -- # uname 00:17:44.885 17:27:04 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux 
']' 00:17:44.885 17:27:04 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:44.885 17:27:04 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:44.885 17:27:04 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:44.885 17:27:04 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:44.885 17:27:04 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:44.885 17:27:04 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:44.885 17:27:04 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:44.885 17:27:04 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:44.885 17:27:04 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:44.885 17:27:04 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:44.885 17:27:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:44.885 17:27:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:44.885 17:27:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:44.885 17:27:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:45.145 17:27:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:45.145 17:27:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:45.145 17:27:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:45.145 17:27:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:45.145 17:27:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:45.145 17:27:04 -- nvmf/common.sh@104 -- # continue 2 00:17:45.145 17:27:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:45.145 17:27:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:45.145 17:27:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:45.145 17:27:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:45.145 17:27:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:45.145 17:27:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:45.145 17:27:04 -- nvmf/common.sh@104 -- # continue 2 00:17:45.145 17:27:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:45.145 17:27:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:45.145 17:27:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:45.145 17:27:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:45.145 17:27:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:45.145 17:27:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:45.145 17:27:04 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:45.145 17:27:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:45.145 17:27:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:45.145 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:45.145 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:45.145 altname enp217s0f0np0 00:17:45.145 altname ens818f0np0 00:17:45.145 inet 192.168.100.8/24 scope global mlx_0_0 00:17:45.145 valid_lft forever preferred_lft forever 00:17:45.145 17:27:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:45.145 17:27:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:45.145 17:27:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:45.145 17:27:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:45.145 17:27:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:45.145 17:27:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:45.145 17:27:04 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:45.145 17:27:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:45.145 17:27:04 -- nvmf/common.sh@80 -- # 
ip addr show mlx_0_1 00:17:45.145 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:45.145 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:45.145 altname enp217s0f1np1 00:17:45.145 altname ens818f1np1 00:17:45.145 inet 192.168.100.9/24 scope global mlx_0_1 00:17:45.145 valid_lft forever preferred_lft forever 00:17:45.145 17:27:04 -- nvmf/common.sh@410 -- # return 0 00:17:45.145 17:27:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:45.145 17:27:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:45.145 17:27:04 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:45.145 17:27:04 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:45.145 17:27:04 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:45.145 17:27:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:45.145 17:27:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:45.145 17:27:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:45.145 17:27:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:45.145 17:27:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:45.145 17:27:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:45.145 17:27:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:45.145 17:27:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:45.145 17:27:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:45.145 17:27:04 -- nvmf/common.sh@104 -- # continue 2 00:17:45.145 17:27:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:45.145 17:27:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:45.146 17:27:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:45.146 17:27:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:45.146 17:27:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:45.146 17:27:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:45.146 17:27:04 -- nvmf/common.sh@104 -- # continue 2 00:17:45.146 17:27:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:45.146 17:27:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:45.146 17:27:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:45.146 17:27:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:45.146 17:27:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:45.146 17:27:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:45.146 17:27:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:45.146 17:27:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:45.146 17:27:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:45.146 17:27:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:45.146 17:27:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:45.146 17:27:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:45.146 17:27:04 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:45.146 192.168.100.9' 00:17:45.146 17:27:04 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:45.146 192.168.100.9' 00:17:45.146 17:27:04 -- nvmf/common.sh@445 -- # head -n 1 00:17:45.146 17:27:04 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:45.146 17:27:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:45.146 192.168.100.9' 00:17:45.146 17:27:04 -- nvmf/common.sh@446 -- # tail -n +2 00:17:45.146 17:27:04 -- nvmf/common.sh@446 -- # head -n 1 00:17:45.146 17:27:04 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:45.146 17:27:04 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:45.146 17:27:04 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:45.146 17:27:04 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:45.146 17:27:04 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:45.146 17:27:04 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:45.146 17:27:04 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:45.146 17:27:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:45.146 17:27:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:45.146 17:27:04 -- common/autotest_common.sh@10 -- # set +x 00:17:45.146 17:27:04 -- nvmf/common.sh@469 -- # nvmfpid=2692275 00:17:45.146 17:27:04 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:45.146 17:27:04 -- nvmf/common.sh@470 -- # waitforlisten 2692275 00:17:45.146 17:27:04 -- common/autotest_common.sh@829 -- # '[' -z 2692275 ']' 00:17:45.146 17:27:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.146 17:27:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.146 17:27:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.146 17:27:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.146 17:27:04 -- common/autotest_common.sh@10 -- # set +x 00:17:45.146 [2024-11-09 17:27:04.862244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:45.146 [2024-11-09 17:27:04.862300] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.146 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.404 [2024-11-09 17:27:04.932170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.404 [2024-11-09 17:27:05.010004] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:45.405 [2024-11-09 17:27:05.010110] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.405 [2024-11-09 17:27:05.010120] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.405 [2024-11-09 17:27:05.010129] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
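(For reference: nvmfappstart -m 0x1, as traced here, launches nvmf_tgt in the background and waits for its RPC socket. The loop below is a sketch of that behavior; the rpc_get_methods probe is an assumption, not copied from autotest_common.sh.)
spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# block until the target answers on /var/tmp/spdk.sock
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done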
00:17:45.405 [2024-11-09 17:27:05.010150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.973 17:27:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.973 17:27:05 -- common/autotest_common.sh@862 -- # return 0 00:17:45.973 17:27:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:45.973 17:27:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:45.973 17:27:05 -- common/autotest_common.sh@10 -- # set +x 00:17:45.973 17:27:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.973 17:27:05 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:46.232 [2024-11-09 17:27:05.895365] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x207cf30/0x2081420) succeed. 00:17:46.232 [2024-11-09 17:27:05.904437] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x207e430/0x20c2ac0) succeed. 00:17:46.232 17:27:05 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:46.232 17:27:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:46.232 17:27:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:46.232 17:27:05 -- common/autotest_common.sh@10 -- # set +x 00:17:46.232 ************************************ 00:17:46.232 START TEST lvs_grow_clean 00:17:46.232 ************************************ 00:17:46.232 17:27:05 -- common/autotest_common.sh@1114 -- # lvs_grow 00:17:46.232 17:27:05 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:46.232 17:27:05 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:46.232 17:27:05 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:46.232 17:27:05 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:46.232 17:27:05 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:46.232 17:27:05 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:46.232 17:27:05 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:46.232 17:27:05 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:46.232 17:27:05 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:46.490 17:27:06 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:46.490 17:27:06 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:46.750 17:27:06 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b7436a5d-e75a-4e80-8603-85d46e3745a5 00:17:46.750 17:27:06 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7436a5d-e75a-4e80-8603-85d46e3745a5 00:17:46.750 17:27:06 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:46.750 17:27:06 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:46.750 17:27:06 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:46.750 17:27:06 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
b7436a5d-e75a-4e80-8603-85d46e3745a5 lvol 150 00:17:47.009 17:27:06 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7e3c144e-4535-4db1-97cb-9b605dd2365a 00:17:47.009 17:27:06 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.009 17:27:06 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:47.268 [2024-11-09 17:27:06.859731] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:47.268 [2024-11-09 17:27:06.859783] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:47.268 true 00:17:47.268 17:27:06 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7436a5d-e75a-4e80-8603-85d46e3745a5 00:17:47.268 17:27:06 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:47.527 17:27:07 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:47.527 17:27:07 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:47.527 17:27:07 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7e3c144e-4535-4db1-97cb-9b605dd2365a 00:17:47.786 17:27:07 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:47.786 [2024-11-09 17:27:07.537946] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:47.786 17:27:07 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:48.045 17:27:07 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2693026 00:17:48.045 17:27:07 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:48.045 17:27:07 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:48.045 17:27:07 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2693026 /var/tmp/bdevperf.sock 00:17:48.045 17:27:07 -- common/autotest_common.sh@829 -- # '[' -z 2693026 ']' 00:17:48.045 17:27:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.045 17:27:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.045 17:27:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.045 17:27:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.045 17:27:07 -- common/autotest_common.sh@10 -- # set +x 00:17:48.045 [2024-11-09 17:27:07.768279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
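(For reference: the lvs_grow_clean setup just traced has the shape sketched below; rpc and aio_file stand in for the full workspace paths, and the lvstore UUID comes back from the create call.)
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
aio_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$aio_file"
$rpc bdev_aio_create "$aio_file" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 with the 200M file
$rpc bdev_lvol_create -u "$lvs" lvol 150
truncate -s 400M "$aio_file"       # grow the backing file...
$rpc bdev_aio_rescan aio_bdev      # ...and tell the AIO bdev to pick up the new size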
00:17:48.045 [2024-11-09 17:27:07.768336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693026 ] 00:17:48.045 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.305 [2024-11-09 17:27:07.836713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.305 [2024-11-09 17:27:07.909951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.872 17:27:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.872 17:27:08 -- common/autotest_common.sh@862 -- # return 0 00:17:48.872 17:27:08 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:49.131 Nvme0n1 00:17:49.131 17:27:08 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:49.390 [ 00:17:49.390 { 00:17:49.390 "name": "Nvme0n1", 00:17:49.390 "aliases": [ 00:17:49.390 "7e3c144e-4535-4db1-97cb-9b605dd2365a" 00:17:49.390 ], 00:17:49.390 "product_name": "NVMe disk", 00:17:49.390 "block_size": 4096, 00:17:49.390 "num_blocks": 38912, 00:17:49.390 "uuid": "7e3c144e-4535-4db1-97cb-9b605dd2365a", 00:17:49.390 "assigned_rate_limits": { 00:17:49.390 "rw_ios_per_sec": 0, 00:17:49.390 "rw_mbytes_per_sec": 0, 00:17:49.390 "r_mbytes_per_sec": 0, 00:17:49.390 "w_mbytes_per_sec": 0 00:17:49.390 }, 00:17:49.390 "claimed": false, 00:17:49.390 "zoned": false, 00:17:49.390 "supported_io_types": { 00:17:49.390 "read": true, 00:17:49.390 "write": true, 00:17:49.390 "unmap": true, 00:17:49.390 "write_zeroes": true, 00:17:49.390 "flush": true, 00:17:49.390 "reset": true, 00:17:49.390 "compare": true, 00:17:49.390 "compare_and_write": true, 00:17:49.390 "abort": true, 00:17:49.390 "nvme_admin": true, 00:17:49.390 "nvme_io": true 00:17:49.390 }, 00:17:49.390 "memory_domains": [ 00:17:49.390 { 00:17:49.390 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:49.390 "dma_device_type": 0 00:17:49.390 } 00:17:49.390 ], 00:17:49.390 "driver_specific": { 00:17:49.390 "nvme": [ 00:17:49.390 { 00:17:49.390 "trid": { 00:17:49.390 "trtype": "RDMA", 00:17:49.390 "adrfam": "IPv4", 00:17:49.390 "traddr": "192.168.100.8", 00:17:49.390 "trsvcid": "4420", 00:17:49.390 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:49.390 }, 00:17:49.390 "ctrlr_data": { 00:17:49.390 "cntlid": 1, 00:17:49.390 "vendor_id": "0x8086", 00:17:49.390 "model_number": "SPDK bdev Controller", 00:17:49.390 "serial_number": "SPDK0", 00:17:49.390 "firmware_revision": "24.01.1", 00:17:49.390 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:49.390 "oacs": { 00:17:49.390 "security": 0, 00:17:49.390 "format": 0, 00:17:49.390 "firmware": 0, 00:17:49.391 "ns_manage": 0 00:17:49.391 }, 00:17:49.391 "multi_ctrlr": true, 00:17:49.391 "ana_reporting": false 00:17:49.391 }, 00:17:49.391 "vs": { 00:17:49.391 "nvme_version": "1.3" 00:17:49.391 }, 00:17:49.391 "ns_data": { 00:17:49.391 "id": 1, 00:17:49.391 "can_share": true 00:17:49.391 } 00:17:49.391 } 00:17:49.391 ], 00:17:49.391 "mp_policy": "active_passive" 00:17:49.391 } 00:17:49.391 } 00:17:49.391 ] 00:17:49.391 17:27:09 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2693301 00:17:49.391 17:27:09 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:49.391 17:27:09 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:49.391 Running I/O for 10 seconds... 00:17:50.767 Latency(us) 00:17:50.767 [2024-11-09T16:27:10.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.767 [2024-11-09T16:27:10.537Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.767 Nvme0n1 : 1.00 36224.00 141.50 0.00 0.00 0.00 0.00 0.00 00:17:50.767 [2024-11-09T16:27:10.537Z] =================================================================================================================== 00:17:50.767 [2024-11-09T16:27:10.537Z] Total : 36224.00 141.50 0.00 0.00 0.00 0.00 0.00 00:17:50.767 00:17:51.335 17:27:11 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b7436a5d-e75a-4e80-8603-85d46e3745a5 00:17:51.593 [2024-11-09T16:27:11.363Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.594 Nvme0n1 : 2.00 36322.50 141.88 0.00 0.00 0.00 0.00 0.00 00:17:51.594 [2024-11-09T16:27:11.364Z] =================================================================================================================== 00:17:51.594 [2024-11-09T16:27:11.364Z] Total : 36322.50 141.88 0.00 0.00 0.00 0.00 0.00 00:17:51.594 00:17:51.594 true 00:17:51.594 17:27:11 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7436a5d-e75a-4e80-8603-85d46e3745a5 00:17:51.594 17:27:11 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:51.852 17:27:11 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:51.852 17:27:11 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:51.852 17:27:11 -- target/nvmf_lvs_grow.sh@65 -- # wait 2693301 00:17:52.420 [2024-11-09T16:27:12.190Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.420 Nvme0n1 : 3.00 36609.00 143.00 0.00 0.00 0.00 0.00 0.00 00:17:52.420 [2024-11-09T16:27:12.190Z] =================================================================================================================== 00:17:52.420 [2024-11-09T16:27:12.190Z] Total : 36609.00 143.00 0.00 0.00 0.00 0.00 0.00 00:17:52.420 00:17:53.436 [2024-11-09T16:27:13.206Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.436 Nvme0n1 : 4.00 36824.00 143.84 0.00 0.00 0.00 0.00 0.00 00:17:53.436 [2024-11-09T16:27:13.206Z] =================================================================================================================== 00:17:53.436 [2024-11-09T16:27:13.206Z] Total : 36824.00 143.84 0.00 0.00 0.00 0.00 0.00 00:17:53.436 00:17:54.389 [2024-11-09T16:27:14.159Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.389 Nvme0n1 : 5.00 36972.80 144.43 0.00 0.00 0.00 0.00 0.00 00:17:54.389 [2024-11-09T16:27:14.159Z] =================================================================================================================== 00:17:54.389 [2024-11-09T16:27:14.159Z] Total : 36972.80 144.43 0.00 0.00 0.00 0.00 0.00 00:17:54.389 00:17:55.766 [2024-11-09T16:27:15.536Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:55.766 Nvme0n1 : 6.00 37067.00 144.79 0.00 0.00 0.00 0.00 0.00 00:17:55.766 [2024-11-09T16:27:15.536Z] 
=================================================================================================================== 00:17:55.766 [2024-11-09T16:27:15.536Z] Total : 37067.00 144.79 0.00 0.00 0.00 0.00 0.00 00:17:55.766 00:17:56.702 [2024-11-09T16:27:16.472Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.702 Nvme0n1 : 7.00 37161.29 145.16 0.00 0.00 0.00 0.00 0.00 00:17:56.702 [2024-11-09T16:27:16.472Z] =================================================================================================================== 00:17:56.702 [2024-11-09T16:27:16.472Z] Total : 37161.29 145.16 0.00 0.00 0.00 0.00 0.00 00:17:56.702 00:17:57.639 [2024-11-09T16:27:17.409Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.639 Nvme0n1 : 8.00 37176.25 145.22 0.00 0.00 0.00 0.00 0.00 00:17:57.639 [2024-11-09T16:27:17.409Z] =================================================================================================================== 00:17:57.639 [2024-11-09T16:27:17.409Z] Total : 37176.25 145.22 0.00 0.00 0.00 0.00 0.00 00:17:57.639 00:17:58.576 [2024-11-09T16:27:18.346Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.576 Nvme0n1 : 9.00 37226.89 145.42 0.00 0.00 0.00 0.00 0.00 00:17:58.576 [2024-11-09T16:27:18.346Z] =================================================================================================================== 00:17:58.576 [2024-11-09T16:27:18.346Z] Total : 37226.89 145.42 0.00 0.00 0.00 0.00 0.00 00:17:58.576 00:17:59.513 [2024-11-09T16:27:19.283Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.513 Nvme0n1 : 10.00 37273.60 145.60 0.00 0.00 0.00 0.00 0.00 00:17:59.513 [2024-11-09T16:27:19.283Z] =================================================================================================================== 00:17:59.513 [2024-11-09T16:27:19.283Z] Total : 37273.60 145.60 0.00 0.00 0.00 0.00 0.00 00:17:59.513 00:17:59.513 00:17:59.513 Latency(us) 00:17:59.513 [2024-11-09T16:27:19.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.513 [2024-11-09T16:27:19.283Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.513 Nvme0n1 : 10.00 37274.36 145.60 0.00 0.00 3431.23 2555.90 15623.78 00:17:59.513 [2024-11-09T16:27:19.283Z] =================================================================================================================== 00:17:59.513 [2024-11-09T16:27:19.283Z] Total : 37274.36 145.60 0.00 0.00 3431.23 2555.90 15623.78 00:17:59.513 0 00:17:59.513 17:27:19 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2693026 00:17:59.513 17:27:19 -- common/autotest_common.sh@936 -- # '[' -z 2693026 ']' 00:17:59.513 17:27:19 -- common/autotest_common.sh@940 -- # kill -0 2693026 00:17:59.513 17:27:19 -- common/autotest_common.sh@941 -- # uname 00:17:59.513 17:27:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:59.513 17:27:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2693026 00:17:59.513 17:27:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:59.513 17:27:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:59.513 17:27:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2693026' 00:17:59.513 killing process with pid 2693026 00:17:59.513 17:27:19 -- common/autotest_common.sh@955 -- # kill 2693026 00:17:59.513 Received shutdown signal, test time was about 10.000000 seconds 
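The 10-second randwrite run above deliberately overlaps the lvstore grow with active I/O: the AIO backing file was already truncated from 200M to 400M and rescanned, bdev_lvol_grow_lvstore is issued after the first interval, and total_data_clusters is then expected to report 99 instead of the original 49. A minimal sketch of that grow-and-verify step, using the store UUID from this run (paths shortened; illustrative only):

  # enlarge the backing file and let the AIO bdev pick up the new size
  truncate -s 400M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_rescan aio_bdev
  # grow the lvstore into the new space while bdevperf keeps writing, then verify
  scripts/rpc.py bdev_lvol_grow_lvstore -u b7436a5d-e75a-4e80-8603-85d46e3745a5
  scripts/rpc.py bdev_lvol_get_lvstores -u b7436a5d-e75a-4e80-8603-85d46e3745a5 | jq -r '.[0].total_data_clusters'   # expected: 99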
00:17:59.513 00:17:59.513 Latency(us) 00:17:59.513 [2024-11-09T16:27:19.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.513 [2024-11-09T16:27:19.283Z] =================================================================================================================== 00:17:59.513 [2024-11-09T16:27:19.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:59.513 17:27:19 -- common/autotest_common.sh@960 -- # wait 2693026 00:17:59.772 17:27:19 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:00.034 17:27:19 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7436a5d-e75a-4e80-8603-85d46e3745a5 00:18:00.034 17:27:19 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:00.294 17:27:19 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:00.294 17:27:19 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:00.294 17:27:19 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:00.294 [2024-11-09 17:27:19.985300] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:00.294 17:27:20 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7436a5d-e75a-4e80-8603-85d46e3745a5 00:18:00.294 17:27:20 -- common/autotest_common.sh@650 -- # local es=0 00:18:00.294 17:27:20 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7436a5d-e75a-4e80-8603-85d46e3745a5 00:18:00.294 17:27:20 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:00.294 17:27:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.294 17:27:20 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:00.294 17:27:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.294 17:27:20 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:00.294 17:27:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.294 17:27:20 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:00.294 17:27:20 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:00.294 17:27:20 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7436a5d-e75a-4e80-8603-85d46e3745a5 00:18:00.552 request: 00:18:00.552 { 00:18:00.552 "uuid": "b7436a5d-e75a-4e80-8603-85d46e3745a5", 00:18:00.552 "method": "bdev_lvol_get_lvstores", 00:18:00.552 "req_id": 1 00:18:00.552 } 00:18:00.552 Got JSON-RPC error response 00:18:00.552 response: 00:18:00.552 { 00:18:00.552 "code": -19, 00:18:00.552 "message": "No such device" 00:18:00.552 } 00:18:00.552 17:27:20 -- common/autotest_common.sh@653 -- # es=1 00:18:00.552 17:27:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:00.552 17:27:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:00.552 17:27:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:00.552 17:27:20 -- 
target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:00.811 aio_bdev 00:18:00.811 17:27:20 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7e3c144e-4535-4db1-97cb-9b605dd2365a 00:18:00.811 17:27:20 -- common/autotest_common.sh@897 -- # local bdev_name=7e3c144e-4535-4db1-97cb-9b605dd2365a 00:18:00.811 17:27:20 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:00.811 17:27:20 -- common/autotest_common.sh@899 -- # local i 00:18:00.811 17:27:20 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:00.811 17:27:20 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:00.811 17:27:20 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:00.811 17:27:20 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7e3c144e-4535-4db1-97cb-9b605dd2365a -t 2000 00:18:01.071 [ 00:18:01.071 { 00:18:01.071 "name": "7e3c144e-4535-4db1-97cb-9b605dd2365a", 00:18:01.071 "aliases": [ 00:18:01.071 "lvs/lvol" 00:18:01.071 ], 00:18:01.071 "product_name": "Logical Volume", 00:18:01.071 "block_size": 4096, 00:18:01.071 "num_blocks": 38912, 00:18:01.071 "uuid": "7e3c144e-4535-4db1-97cb-9b605dd2365a", 00:18:01.071 "assigned_rate_limits": { 00:18:01.071 "rw_ios_per_sec": 0, 00:18:01.071 "rw_mbytes_per_sec": 0, 00:18:01.071 "r_mbytes_per_sec": 0, 00:18:01.071 "w_mbytes_per_sec": 0 00:18:01.071 }, 00:18:01.071 "claimed": false, 00:18:01.071 "zoned": false, 00:18:01.071 "supported_io_types": { 00:18:01.071 "read": true, 00:18:01.071 "write": true, 00:18:01.071 "unmap": true, 00:18:01.071 "write_zeroes": true, 00:18:01.071 "flush": false, 00:18:01.071 "reset": true, 00:18:01.071 "compare": false, 00:18:01.071 "compare_and_write": false, 00:18:01.071 "abort": false, 00:18:01.071 "nvme_admin": false, 00:18:01.071 "nvme_io": false 00:18:01.071 }, 00:18:01.071 "driver_specific": { 00:18:01.071 "lvol": { 00:18:01.071 "lvol_store_uuid": "b7436a5d-e75a-4e80-8603-85d46e3745a5", 00:18:01.071 "base_bdev": "aio_bdev", 00:18:01.071 "thin_provision": false, 00:18:01.071 "snapshot": false, 00:18:01.071 "clone": false, 00:18:01.071 "esnap_clone": false 00:18:01.071 } 00:18:01.071 } 00:18:01.071 } 00:18:01.071 ] 00:18:01.071 17:27:20 -- common/autotest_common.sh@905 -- # return 0 00:18:01.071 17:27:20 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7436a5d-e75a-4e80-8603-85d46e3745a5 00:18:01.071 17:27:20 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:01.329 17:27:20 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:01.329 17:27:20 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7436a5d-e75a-4e80-8603-85d46e3745a5 00:18:01.329 17:27:20 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:01.588 17:27:21 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:01.588 17:27:21 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7e3c144e-4535-4db1-97cb-9b605dd2365a 00:18:01.588 17:27:21 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b7436a5d-e75a-4e80-8603-85d46e3745a5 00:18:01.846 17:27:21 -- 
target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:02.105 17:27:21 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:02.105 00:18:02.105 real 0m15.687s 00:18:02.105 user 0m15.665s 00:18:02.105 sys 0m1.124s 00:18:02.105 17:27:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:02.105 17:27:21 -- common/autotest_common.sh@10 -- # set +x 00:18:02.105 ************************************ 00:18:02.105 END TEST lvs_grow_clean 00:18:02.105 ************************************ 00:18:02.105 17:27:21 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:02.105 17:27:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:02.105 17:27:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:02.105 17:27:21 -- common/autotest_common.sh@10 -- # set +x 00:18:02.105 ************************************ 00:18:02.105 START TEST lvs_grow_dirty 00:18:02.105 ************************************ 00:18:02.105 17:27:21 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:18:02.105 17:27:21 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:02.105 17:27:21 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:02.105 17:27:21 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:02.105 17:27:21 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:02.105 17:27:21 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:02.105 17:27:21 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:02.105 17:27:21 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:02.105 17:27:21 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:02.105 17:27:21 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:02.364 17:27:21 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:02.364 17:27:21 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:02.364 17:27:22 -- target/nvmf_lvs_grow.sh@28 -- # lvs=1f959b17-2866-4f6f-b366-8660385071e5 00:18:02.364 17:27:22 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f959b17-2866-4f6f-b366-8660385071e5 00:18:02.364 17:27:22 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:02.622 17:27:22 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:02.622 17:27:22 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:02.622 17:27:22 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1f959b17-2866-4f6f-b366-8660385071e5 lvol 150 00:18:02.881 17:27:22 -- target/nvmf_lvs_grow.sh@33 -- # lvol=5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9 00:18:02.881 17:27:22 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:02.881 17:27:22 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan 
aio_bdev 00:18:02.881 [2024-11-09 17:27:22.612002] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:02.881 [2024-11-09 17:27:22.612054] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:02.881 true 00:18:02.881 17:27:22 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f959b17-2866-4f6f-b366-8660385071e5 00:18:02.881 17:27:22 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:03.140 17:27:22 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:03.140 17:27:22 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:03.399 17:27:22 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9 00:18:03.399 17:27:23 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:03.659 17:27:23 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:03.918 17:27:23 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:03.918 17:27:23 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2695798 00:18:03.918 17:27:23 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:03.918 17:27:23 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2695798 /var/tmp/bdevperf.sock 00:18:03.918 17:27:23 -- common/autotest_common.sh@829 -- # '[' -z 2695798 ']' 00:18:03.918 17:27:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.918 17:27:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.918 17:27:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.918 17:27:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.918 17:27:23 -- common/autotest_common.sh@10 -- # set +x 00:18:03.918 [2024-11-09 17:27:23.523661] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
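As in the clean variant, the initiator side is a bdevperf instance started with -z so it idles until driven over its own RPC socket; the controller attach and the timed run that follow in the trace below go through that socket. A condensed sketch of those initiator-side commands (paths shortened; socket path, address and NQN as used in this run):

  # start bdevperf idle, controlled entirely via /var/tmp/bdevperf.sock
  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # attach the RDMA-exported namespace as Nvme0n1, then kick off the 10s workload
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests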
00:18:03.918 [2024-11-09 17:27:23.523706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695798 ] 00:18:03.918 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.918 [2024-11-09 17:27:23.591522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.918 [2024-11-09 17:27:23.663043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.854 17:27:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.854 17:27:24 -- common/autotest_common.sh@862 -- # return 0 00:18:04.854 17:27:24 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:04.854 Nvme0n1 00:18:04.854 17:27:24 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:05.113 [ 00:18:05.113 { 00:18:05.113 "name": "Nvme0n1", 00:18:05.113 "aliases": [ 00:18:05.113 "5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9" 00:18:05.114 ], 00:18:05.114 "product_name": "NVMe disk", 00:18:05.114 "block_size": 4096, 00:18:05.114 "num_blocks": 38912, 00:18:05.114 "uuid": "5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9", 00:18:05.114 "assigned_rate_limits": { 00:18:05.114 "rw_ios_per_sec": 0, 00:18:05.114 "rw_mbytes_per_sec": 0, 00:18:05.114 "r_mbytes_per_sec": 0, 00:18:05.114 "w_mbytes_per_sec": 0 00:18:05.114 }, 00:18:05.114 "claimed": false, 00:18:05.114 "zoned": false, 00:18:05.114 "supported_io_types": { 00:18:05.114 "read": true, 00:18:05.114 "write": true, 00:18:05.114 "unmap": true, 00:18:05.114 "write_zeroes": true, 00:18:05.114 "flush": true, 00:18:05.114 "reset": true, 00:18:05.114 "compare": true, 00:18:05.114 "compare_and_write": true, 00:18:05.114 "abort": true, 00:18:05.114 "nvme_admin": true, 00:18:05.114 "nvme_io": true 00:18:05.114 }, 00:18:05.114 "memory_domains": [ 00:18:05.114 { 00:18:05.114 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:05.114 "dma_device_type": 0 00:18:05.114 } 00:18:05.114 ], 00:18:05.114 "driver_specific": { 00:18:05.114 "nvme": [ 00:18:05.114 { 00:18:05.114 "trid": { 00:18:05.114 "trtype": "RDMA", 00:18:05.114 "adrfam": "IPv4", 00:18:05.114 "traddr": "192.168.100.8", 00:18:05.114 "trsvcid": "4420", 00:18:05.114 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:05.114 }, 00:18:05.114 "ctrlr_data": { 00:18:05.114 "cntlid": 1, 00:18:05.114 "vendor_id": "0x8086", 00:18:05.114 "model_number": "SPDK bdev Controller", 00:18:05.114 "serial_number": "SPDK0", 00:18:05.114 "firmware_revision": "24.01.1", 00:18:05.114 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:05.114 "oacs": { 00:18:05.114 "security": 0, 00:18:05.114 "format": 0, 00:18:05.114 "firmware": 0, 00:18:05.114 "ns_manage": 0 00:18:05.114 }, 00:18:05.114 "multi_ctrlr": true, 00:18:05.114 "ana_reporting": false 00:18:05.114 }, 00:18:05.114 "vs": { 00:18:05.114 "nvme_version": "1.3" 00:18:05.114 }, 00:18:05.114 "ns_data": { 00:18:05.114 "id": 1, 00:18:05.114 "can_share": true 00:18:05.114 } 00:18:05.114 } 00:18:05.114 ], 00:18:05.114 "mp_policy": "active_passive" 00:18:05.114 } 00:18:05.114 } 00:18:05.114 ] 00:18:05.114 17:27:24 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2696064 00:18:05.114 17:27:24 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:05.114 17:27:24 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:05.114 Running I/O for 10 seconds... 00:18:06.492 Latency(us) 00:18:06.492 [2024-11-09T16:27:26.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.492 [2024-11-09T16:27:26.262Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.492 Nvme0n1 : 1.00 36413.00 142.24 0.00 0.00 0.00 0.00 0.00 00:18:06.492 [2024-11-09T16:27:26.262Z] =================================================================================================================== 00:18:06.492 [2024-11-09T16:27:26.262Z] Total : 36413.00 142.24 0.00 0.00 0.00 0.00 0.00 00:18:06.492 00:18:07.060 17:27:26 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1f959b17-2866-4f6f-b366-8660385071e5 00:18:07.319 [2024-11-09T16:27:27.089Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:07.319 Nvme0n1 : 2.00 36833.50 143.88 0.00 0.00 0.00 0.00 0.00 00:18:07.319 [2024-11-09T16:27:27.089Z] =================================================================================================================== 00:18:07.319 [2024-11-09T16:27:27.089Z] Total : 36833.50 143.88 0.00 0.00 0.00 0.00 0.00 00:18:07.319 00:18:07.319 true 00:18:07.319 17:27:26 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f959b17-2866-4f6f-b366-8660385071e5 00:18:07.319 17:27:26 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:07.578 17:27:27 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:07.578 17:27:27 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:07.578 17:27:27 -- target/nvmf_lvs_grow.sh@65 -- # wait 2696064 00:18:08.146 [2024-11-09T16:27:27.916Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:08.146 Nvme0n1 : 3.00 36908.00 144.17 0.00 0.00 0.00 0.00 0.00 00:18:08.146 [2024-11-09T16:27:27.916Z] =================================================================================================================== 00:18:08.146 [2024-11-09T16:27:27.916Z] Total : 36908.00 144.17 0.00 0.00 0.00 0.00 0.00 00:18:08.146 00:18:09.523 [2024-11-09T16:27:29.293Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.523 Nvme0n1 : 4.00 37017.00 144.60 0.00 0.00 0.00 0.00 0.00 00:18:09.523 [2024-11-09T16:27:29.293Z] =================================================================================================================== 00:18:09.523 [2024-11-09T16:27:29.293Z] Total : 37017.00 144.60 0.00 0.00 0.00 0.00 0.00 00:18:09.523 00:18:10.459 [2024-11-09T16:27:30.229Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:10.459 Nvme0n1 : 5.00 37133.20 145.05 0.00 0.00 0.00 0.00 0.00 00:18:10.459 [2024-11-09T16:27:30.229Z] =================================================================================================================== 00:18:10.459 [2024-11-09T16:27:30.229Z] Total : 37133.20 145.05 0.00 0.00 0.00 0.00 0.00 00:18:10.459 00:18:11.396 [2024-11-09T16:27:31.166Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.396 Nvme0n1 : 6.00 37056.50 144.75 0.00 0.00 0.00 0.00 0.00 00:18:11.396 [2024-11-09T16:27:31.166Z] 
=================================================================================================================== 00:18:11.396 [2024-11-09T16:27:31.166Z] Total : 37056.50 144.75 0.00 0.00 0.00 0.00 0.00 00:18:11.396 00:18:12.333 [2024-11-09T16:27:32.103Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.333 Nvme0n1 : 7.00 37142.43 145.09 0.00 0.00 0.00 0.00 0.00 00:18:12.333 [2024-11-09T16:27:32.103Z] =================================================================================================================== 00:18:12.333 [2024-11-09T16:27:32.103Z] Total : 37142.43 145.09 0.00 0.00 0.00 0.00 0.00 00:18:12.333 00:18:13.270 [2024-11-09T16:27:33.040Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:13.270 Nvme0n1 : 8.00 37211.50 145.36 0.00 0.00 0.00 0.00 0.00 00:18:13.270 [2024-11-09T16:27:33.040Z] =================================================================================================================== 00:18:13.270 [2024-11-09T16:27:33.040Z] Total : 37211.50 145.36 0.00 0.00 0.00 0.00 0.00 00:18:13.270 00:18:14.207 [2024-11-09T16:27:33.977Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:14.207 Nvme0n1 : 9.00 37261.78 145.55 0.00 0.00 0.00 0.00 0.00 00:18:14.207 [2024-11-09T16:27:33.977Z] =================================================================================================================== 00:18:14.207 [2024-11-09T16:27:33.977Z] Total : 37261.78 145.55 0.00 0.00 0.00 0.00 0.00 00:18:14.207 00:18:15.143 [2024-11-09T16:27:34.913Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:15.143 Nvme0n1 : 10.00 37305.20 145.72 0.00 0.00 0.00 0.00 0.00 00:18:15.143 [2024-11-09T16:27:34.913Z] =================================================================================================================== 00:18:15.143 [2024-11-09T16:27:34.913Z] Total : 37305.20 145.72 0.00 0.00 0.00 0.00 0.00 00:18:15.143 00:18:15.143 00:18:15.143 Latency(us) 00:18:15.143 [2024-11-09T16:27:34.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.143 [2024-11-09T16:27:34.913Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:15.143 Nvme0n1 : 10.00 37305.10 145.72 0.00 0.00 3428.46 2188.90 13736.35 00:18:15.143 [2024-11-09T16:27:34.913Z] =================================================================================================================== 00:18:15.143 [2024-11-09T16:27:34.913Z] Total : 37305.10 145.72 0.00 0.00 3428.46 2188.90 13736.35 00:18:15.143 0 00:18:15.402 17:27:34 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2695798 00:18:15.402 17:27:34 -- common/autotest_common.sh@936 -- # '[' -z 2695798 ']' 00:18:15.402 17:27:34 -- common/autotest_common.sh@940 -- # kill -0 2695798 00:18:15.402 17:27:34 -- common/autotest_common.sh@941 -- # uname 00:18:15.402 17:27:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:15.402 17:27:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2695798 00:18:15.402 17:27:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:15.402 17:27:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:15.402 17:27:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2695798' 00:18:15.402 killing process with pid 2695798 00:18:15.402 17:27:34 -- common/autotest_common.sh@955 -- # kill 2695798 00:18:15.402 Received shutdown signal, test time was about 10.000000 seconds 
00:18:15.402 00:18:15.402 Latency(us) 00:18:15.402 [2024-11-09T16:27:35.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.402 [2024-11-09T16:27:35.172Z] =================================================================================================================== 00:18:15.402 [2024-11-09T16:27:35.172Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.402 17:27:34 -- common/autotest_common.sh@960 -- # wait 2695798 00:18:15.661 17:27:35 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:15.661 17:27:35 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f959b17-2866-4f6f-b366-8660385071e5 00:18:15.661 17:27:35 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:15.921 17:27:35 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:15.921 17:27:35 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:15.921 17:27:35 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 2692275 00:18:15.921 17:27:35 -- target/nvmf_lvs_grow.sh@74 -- # wait 2692275 00:18:15.921 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2692275 Killed "${NVMF_APP[@]}" "$@" 00:18:15.921 17:27:35 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:15.921 17:27:35 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:15.921 17:27:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:15.921 17:27:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:15.921 17:27:35 -- common/autotest_common.sh@10 -- # set +x 00:18:15.921 17:27:35 -- nvmf/common.sh@469 -- # nvmfpid=2697964 00:18:15.921 17:27:35 -- nvmf/common.sh@470 -- # waitforlisten 2697964 00:18:15.921 17:27:35 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:15.921 17:27:35 -- common/autotest_common.sh@829 -- # '[' -z 2697964 ']' 00:18:15.921 17:27:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.921 17:27:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.921 17:27:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.921 17:27:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.921 17:27:35 -- common/autotest_common.sh@10 -- # set +x 00:18:16.181 [2024-11-09 17:27:35.692377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:16.181 [2024-11-09 17:27:35.692428] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.181 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.181 [2024-11-09 17:27:35.762888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.181 [2024-11-09 17:27:35.834974] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:16.181 [2024-11-09 17:27:35.835075] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
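Because this is the dirty variant, the nvmf target that owns the lvstore is killed with SIGKILL and a fresh nvmf_tgt is started; when the AIO bdev is re-created in the trace below, the blobstore detects the unclean shutdown and performs recovery before the lvol reappears. A minimal sketch of that recreate-and-verify step on the restarted target, with the store UUID from this run (paths shortened; illustrative only):

  # re-register the backing file; blobstore recovery runs automatically on load
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  # the grown geometry must survive the dirty shutdown
  scripts/rpc.py bdev_lvol_get_lvstores -u 1f959b17-2866-4f6f-b366-8660385071e5 | jq -r '.[0].total_data_clusters'   # expected: 99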
00:18:16.181 [2024-11-09 17:27:35.835085] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.181 [2024-11-09 17:27:35.835094] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.181 [2024-11-09 17:27:35.835119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.749 17:27:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.749 17:27:36 -- common/autotest_common.sh@862 -- # return 0 00:18:16.749 17:27:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:16.749 17:27:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:16.749 17:27:36 -- common/autotest_common.sh@10 -- # set +x 00:18:17.008 17:27:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.008 17:27:36 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:17.008 [2024-11-09 17:27:36.712176] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:17.008 [2024-11-09 17:27:36.712278] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:17.008 [2024-11-09 17:27:36.712308] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:17.008 17:27:36 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:17.008 17:27:36 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9 00:18:17.008 17:27:36 -- common/autotest_common.sh@897 -- # local bdev_name=5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9 00:18:17.008 17:27:36 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:17.008 17:27:36 -- common/autotest_common.sh@899 -- # local i 00:18:17.008 17:27:36 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:17.008 17:27:36 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:17.008 17:27:36 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:17.267 17:27:36 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9 -t 2000 00:18:17.526 [ 00:18:17.526 { 00:18:17.526 "name": "5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9", 00:18:17.526 "aliases": [ 00:18:17.526 "lvs/lvol" 00:18:17.526 ], 00:18:17.526 "product_name": "Logical Volume", 00:18:17.526 "block_size": 4096, 00:18:17.526 "num_blocks": 38912, 00:18:17.526 "uuid": "5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9", 00:18:17.526 "assigned_rate_limits": { 00:18:17.526 "rw_ios_per_sec": 0, 00:18:17.526 "rw_mbytes_per_sec": 0, 00:18:17.526 "r_mbytes_per_sec": 0, 00:18:17.526 "w_mbytes_per_sec": 0 00:18:17.526 }, 00:18:17.526 "claimed": false, 00:18:17.526 "zoned": false, 00:18:17.526 "supported_io_types": { 00:18:17.526 "read": true, 00:18:17.526 "write": true, 00:18:17.526 "unmap": true, 00:18:17.526 "write_zeroes": true, 00:18:17.526 "flush": false, 00:18:17.526 "reset": true, 00:18:17.526 "compare": false, 00:18:17.526 "compare_and_write": false, 00:18:17.526 "abort": false, 00:18:17.526 "nvme_admin": false, 00:18:17.526 "nvme_io": false 00:18:17.526 }, 00:18:17.526 "driver_specific": { 00:18:17.526 "lvol": { 00:18:17.526 "lvol_store_uuid": "1f959b17-2866-4f6f-b366-8660385071e5", 00:18:17.526 "base_bdev": "aio_bdev", 00:18:17.526 "thin_provision": false, 
00:18:17.526 "snapshot": false, 00:18:17.526 "clone": false, 00:18:17.526 "esnap_clone": false 00:18:17.526 } 00:18:17.526 } 00:18:17.526 } 00:18:17.526 ] 00:18:17.526 17:27:37 -- common/autotest_common.sh@905 -- # return 0 00:18:17.526 17:27:37 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f959b17-2866-4f6f-b366-8660385071e5 00:18:17.526 17:27:37 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:17.526 17:27:37 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:17.526 17:27:37 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f959b17-2866-4f6f-b366-8660385071e5 00:18:17.526 17:27:37 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:17.785 17:27:37 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:17.785 17:27:37 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:18.044 [2024-11-09 17:27:37.588552] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:18.044 17:27:37 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f959b17-2866-4f6f-b366-8660385071e5 00:18:18.044 17:27:37 -- common/autotest_common.sh@650 -- # local es=0 00:18:18.044 17:27:37 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f959b17-2866-4f6f-b366-8660385071e5 00:18:18.044 17:27:37 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:18.044 17:27:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.044 17:27:37 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:18.044 17:27:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.045 17:27:37 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:18.045 17:27:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.045 17:27:37 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:18.045 17:27:37 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:18.045 17:27:37 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f959b17-2866-4f6f-b366-8660385071e5 00:18:18.045 request: 00:18:18.045 { 00:18:18.045 "uuid": "1f959b17-2866-4f6f-b366-8660385071e5", 00:18:18.045 "method": "bdev_lvol_get_lvstores", 00:18:18.045 "req_id": 1 00:18:18.045 } 00:18:18.045 Got JSON-RPC error response 00:18:18.045 response: 00:18:18.045 { 00:18:18.045 "code": -19, 00:18:18.045 "message": "No such device" 00:18:18.045 } 00:18:18.045 17:27:37 -- common/autotest_common.sh@653 -- # es=1 00:18:18.045 17:27:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:18.045 17:27:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:18.045 17:27:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:18.304 17:27:37 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:18.304 aio_bdev 00:18:18.304 17:27:37 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9 00:18:18.304 17:27:37 -- common/autotest_common.sh@897 -- # local bdev_name=5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9 00:18:18.304 17:27:37 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:18.304 17:27:37 -- common/autotest_common.sh@899 -- # local i 00:18:18.304 17:27:37 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:18.304 17:27:37 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:18.304 17:27:37 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:18.563 17:27:38 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9 -t 2000 00:18:18.563 [ 00:18:18.563 { 00:18:18.563 "name": "5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9", 00:18:18.563 "aliases": [ 00:18:18.563 "lvs/lvol" 00:18:18.563 ], 00:18:18.563 "product_name": "Logical Volume", 00:18:18.563 "block_size": 4096, 00:18:18.563 "num_blocks": 38912, 00:18:18.563 "uuid": "5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9", 00:18:18.563 "assigned_rate_limits": { 00:18:18.563 "rw_ios_per_sec": 0, 00:18:18.563 "rw_mbytes_per_sec": 0, 00:18:18.563 "r_mbytes_per_sec": 0, 00:18:18.563 "w_mbytes_per_sec": 0 00:18:18.563 }, 00:18:18.563 "claimed": false, 00:18:18.563 "zoned": false, 00:18:18.563 "supported_io_types": { 00:18:18.563 "read": true, 00:18:18.563 "write": true, 00:18:18.563 "unmap": true, 00:18:18.563 "write_zeroes": true, 00:18:18.563 "flush": false, 00:18:18.563 "reset": true, 00:18:18.563 "compare": false, 00:18:18.563 "compare_and_write": false, 00:18:18.563 "abort": false, 00:18:18.563 "nvme_admin": false, 00:18:18.563 "nvme_io": false 00:18:18.563 }, 00:18:18.563 "driver_specific": { 00:18:18.563 "lvol": { 00:18:18.563 "lvol_store_uuid": "1f959b17-2866-4f6f-b366-8660385071e5", 00:18:18.563 "base_bdev": "aio_bdev", 00:18:18.563 "thin_provision": false, 00:18:18.563 "snapshot": false, 00:18:18.563 "clone": false, 00:18:18.563 "esnap_clone": false 00:18:18.563 } 00:18:18.563 } 00:18:18.563 } 00:18:18.563 ] 00:18:18.563 17:27:38 -- common/autotest_common.sh@905 -- # return 0 00:18:18.563 17:27:38 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f959b17-2866-4f6f-b366-8660385071e5 00:18:18.563 17:27:38 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:18.822 17:27:38 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:18.822 17:27:38 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f959b17-2866-4f6f-b366-8660385071e5 00:18:18.822 17:27:38 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:19.080 17:27:38 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:19.081 17:27:38 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5c9b0af4-4855-42c0-9eda-76dc9a5c6ee9 00:18:19.340 17:27:38 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1f959b17-2866-4f6f-b366-8660385071e5 00:18:19.340 17:27:39 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete 
aio_bdev 00:18:19.599 17:27:39 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:19.599 00:18:19.599 real 0m17.555s 00:18:19.599 user 0m45.417s 00:18:19.599 sys 0m3.261s 00:18:19.599 17:27:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:19.599 17:27:39 -- common/autotest_common.sh@10 -- # set +x 00:18:19.599 ************************************ 00:18:19.599 END TEST lvs_grow_dirty 00:18:19.599 ************************************ 00:18:19.599 17:27:39 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:19.599 17:27:39 -- common/autotest_common.sh@806 -- # type=--id 00:18:19.599 17:27:39 -- common/autotest_common.sh@807 -- # id=0 00:18:19.599 17:27:39 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:19.599 17:27:39 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:19.599 17:27:39 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:19.599 17:27:39 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:19.599 17:27:39 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:19.599 17:27:39 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:19.599 nvmf_trace.0 00:18:19.599 17:27:39 -- common/autotest_common.sh@821 -- # return 0 00:18:19.599 17:27:39 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:19.599 17:27:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:19.599 17:27:39 -- nvmf/common.sh@116 -- # sync 00:18:19.599 17:27:39 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:19.599 17:27:39 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:19.599 17:27:39 -- nvmf/common.sh@119 -- # set +e 00:18:19.599 17:27:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:19.599 17:27:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:19.599 rmmod nvme_rdma 00:18:19.858 rmmod nvme_fabrics 00:18:19.858 17:27:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:19.858 17:27:39 -- nvmf/common.sh@123 -- # set -e 00:18:19.858 17:27:39 -- nvmf/common.sh@124 -- # return 0 00:18:19.858 17:27:39 -- nvmf/common.sh@477 -- # '[' -n 2697964 ']' 00:18:19.858 17:27:39 -- nvmf/common.sh@478 -- # killprocess 2697964 00:18:19.858 17:27:39 -- common/autotest_common.sh@936 -- # '[' -z 2697964 ']' 00:18:19.858 17:27:39 -- common/autotest_common.sh@940 -- # kill -0 2697964 00:18:19.858 17:27:39 -- common/autotest_common.sh@941 -- # uname 00:18:19.858 17:27:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.858 17:27:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2697964 00:18:19.858 17:27:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:19.858 17:27:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:19.858 17:27:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2697964' 00:18:19.858 killing process with pid 2697964 00:18:19.858 17:27:39 -- common/autotest_common.sh@955 -- # kill 2697964 00:18:19.858 17:27:39 -- common/autotest_common.sh@960 -- # wait 2697964 00:18:20.117 17:27:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:20.117 17:27:39 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:20.117 00:18:20.117 real 0m41.981s 00:18:20.117 user 1m7.444s 00:18:20.117 sys 0m10.175s 00:18:20.117 17:27:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:20.117 17:27:39 -- common/autotest_common.sh@10 -- 
# set +x 00:18:20.117 ************************************ 00:18:20.117 END TEST nvmf_lvs_grow 00:18:20.117 ************************************ 00:18:20.117 17:27:39 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:20.117 17:27:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:20.117 17:27:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:20.117 17:27:39 -- common/autotest_common.sh@10 -- # set +x 00:18:20.117 ************************************ 00:18:20.117 START TEST nvmf_bdev_io_wait 00:18:20.117 ************************************ 00:18:20.117 17:27:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:20.117 * Looking for test storage... 00:18:20.117 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:20.117 17:27:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:20.117 17:27:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:20.117 17:27:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:20.117 17:27:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:20.117 17:27:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:20.117 17:27:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:20.117 17:27:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:20.117 17:27:39 -- scripts/common.sh@335 -- # IFS=.-: 00:18:20.117 17:27:39 -- scripts/common.sh@335 -- # read -ra ver1 00:18:20.117 17:27:39 -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.117 17:27:39 -- scripts/common.sh@336 -- # read -ra ver2 00:18:20.117 17:27:39 -- scripts/common.sh@337 -- # local 'op=<' 00:18:20.117 17:27:39 -- scripts/common.sh@339 -- # ver1_l=2 00:18:20.117 17:27:39 -- scripts/common.sh@340 -- # ver2_l=1 00:18:20.117 17:27:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:20.117 17:27:39 -- scripts/common.sh@343 -- # case "$op" in 00:18:20.117 17:27:39 -- scripts/common.sh@344 -- # : 1 00:18:20.117 17:27:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:20.117 17:27:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:20.117 17:27:39 -- scripts/common.sh@364 -- # decimal 1 00:18:20.117 17:27:39 -- scripts/common.sh@352 -- # local d=1 00:18:20.117 17:27:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.117 17:27:39 -- scripts/common.sh@354 -- # echo 1 00:18:20.375 17:27:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:20.375 17:27:39 -- scripts/common.sh@365 -- # decimal 2 00:18:20.375 17:27:39 -- scripts/common.sh@352 -- # local d=2 00:18:20.375 17:27:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.375 17:27:39 -- scripts/common.sh@354 -- # echo 2 00:18:20.375 17:27:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:20.375 17:27:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:20.375 17:27:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:20.375 17:27:39 -- scripts/common.sh@367 -- # return 0 00:18:20.375 17:27:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.375 17:27:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.375 --rc genhtml_branch_coverage=1 00:18:20.375 --rc genhtml_function_coverage=1 00:18:20.375 --rc genhtml_legend=1 00:18:20.375 --rc geninfo_all_blocks=1 00:18:20.375 --rc geninfo_unexecuted_blocks=1 00:18:20.375 00:18:20.375 ' 00:18:20.375 17:27:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.375 --rc genhtml_branch_coverage=1 00:18:20.375 --rc genhtml_function_coverage=1 00:18:20.375 --rc genhtml_legend=1 00:18:20.375 --rc geninfo_all_blocks=1 00:18:20.375 --rc geninfo_unexecuted_blocks=1 00:18:20.375 00:18:20.375 ' 00:18:20.375 17:27:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.375 --rc genhtml_branch_coverage=1 00:18:20.375 --rc genhtml_function_coverage=1 00:18:20.375 --rc genhtml_legend=1 00:18:20.375 --rc geninfo_all_blocks=1 00:18:20.375 --rc geninfo_unexecuted_blocks=1 00:18:20.375 00:18:20.375 ' 00:18:20.375 17:27:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.375 --rc genhtml_branch_coverage=1 00:18:20.375 --rc genhtml_function_coverage=1 00:18:20.375 --rc genhtml_legend=1 00:18:20.375 --rc geninfo_all_blocks=1 00:18:20.375 --rc geninfo_unexecuted_blocks=1 00:18:20.375 00:18:20.375 ' 00:18:20.375 17:27:39 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.375 17:27:39 -- nvmf/common.sh@7 -- # uname -s 00:18:20.375 17:27:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.375 17:27:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.375 17:27:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.375 17:27:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.375 17:27:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.376 17:27:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.376 17:27:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.376 17:27:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.376 17:27:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.376 17:27:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.376 17:27:39 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:20.376 17:27:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:20.376 17:27:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.376 17:27:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.376 17:27:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.376 17:27:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:20.376 17:27:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.376 17:27:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.376 17:27:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.376 17:27:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.376 17:27:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.376 17:27:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.376 17:27:39 -- paths/export.sh@5 -- # export PATH 00:18:20.376 17:27:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.376 17:27:39 -- nvmf/common.sh@46 -- # : 0 00:18:20.376 17:27:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:20.376 17:27:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:20.376 17:27:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:20.376 17:27:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.376 17:27:39 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.376 17:27:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:20.376 17:27:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:20.376 17:27:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:20.376 17:27:39 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:20.376 17:27:39 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:20.376 17:27:39 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:20.376 17:27:39 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:20.376 17:27:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.376 17:27:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:20.376 17:27:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:20.376 17:27:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:20.376 17:27:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.376 17:27:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.376 17:27:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.376 17:27:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:20.376 17:27:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:20.376 17:27:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:20.376 17:27:39 -- common/autotest_common.sh@10 -- # set +x 00:18:26.946 17:27:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:26.946 17:27:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:26.946 17:27:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:26.946 17:27:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:26.946 17:27:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:26.946 17:27:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:26.946 17:27:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:26.946 17:27:46 -- nvmf/common.sh@294 -- # net_devs=() 00:18:26.946 17:27:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:26.946 17:27:46 -- nvmf/common.sh@295 -- # e810=() 00:18:26.946 17:27:46 -- nvmf/common.sh@295 -- # local -ga e810 00:18:26.946 17:27:46 -- nvmf/common.sh@296 -- # x722=() 00:18:26.946 17:27:46 -- nvmf/common.sh@296 -- # local -ga x722 00:18:26.946 17:27:46 -- nvmf/common.sh@297 -- # mlx=() 00:18:26.946 17:27:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:26.946 17:27:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:26.946 17:27:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:26.946 17:27:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:26.946 17:27:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:26.946 17:27:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:26.946 17:27:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:26.946 17:27:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:26.946 17:27:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:26.946 17:27:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:26.946 17:27:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:26.946 17:27:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:26.946 17:27:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:26.946 17:27:46 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:26.946 17:27:46 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:18:26.946 17:27:46 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:26.946 17:27:46 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:26.946 17:27:46 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:26.946 17:27:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:26.946 17:27:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:26.946 17:27:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:26.946 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:26.946 17:27:46 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:26.946 17:27:46 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:26.946 17:27:46 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:26.946 17:27:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:26.946 17:27:46 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:26.947 17:27:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:26.947 17:27:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:26.947 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:26.947 17:27:46 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:26.947 17:27:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:26.947 17:27:46 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:26.947 17:27:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.947 17:27:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:26.947 17:27:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.947 17:27:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:26.947 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:26.947 17:27:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.947 17:27:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:26.947 17:27:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.947 17:27:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:26.947 17:27:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.947 17:27:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:26.947 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:26.947 17:27:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.947 17:27:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:26.947 17:27:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:26.947 17:27:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:26.947 17:27:46 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:26.947 17:27:46 -- nvmf/common.sh@57 -- # uname 00:18:26.947 17:27:46 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:26.947 17:27:46 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:18:26.947 17:27:46 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:26.947 17:27:46 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:26.947 17:27:46 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:26.947 17:27:46 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:26.947 17:27:46 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:26.947 17:27:46 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:26.947 17:27:46 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:26.947 17:27:46 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:26.947 17:27:46 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:26.947 17:27:46 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:26.947 17:27:46 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:26.947 17:27:46 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:26.947 17:27:46 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:26.947 17:27:46 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:26.947 17:27:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:26.947 17:27:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:26.947 17:27:46 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:26.947 17:27:46 -- nvmf/common.sh@104 -- # continue 2 00:18:26.947 17:27:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:26.947 17:27:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:26.947 17:27:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:26.947 17:27:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:26.947 17:27:46 -- nvmf/common.sh@104 -- # continue 2 00:18:26.947 17:27:46 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:26.947 17:27:46 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:26.947 17:27:46 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:26.947 17:27:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:26.947 17:27:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:26.947 17:27:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:26.947 17:27:46 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:26.947 17:27:46 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:26.947 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:26.947 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:26.947 altname enp217s0f0np0 00:18:26.947 altname ens818f0np0 00:18:26.947 inet 192.168.100.8/24 scope global mlx_0_0 00:18:26.947 valid_lft forever preferred_lft forever 00:18:26.947 17:27:46 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:26.947 17:27:46 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:26.947 17:27:46 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:26.947 17:27:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:26.947 17:27:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:26.947 17:27:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:26.947 17:27:46 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:26.947 17:27:46 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:26.947 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:26.947 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:26.947 altname enp217s0f1np1 00:18:26.947 altname ens818f1np1 00:18:26.947 inet 192.168.100.9/24 scope global mlx_0_1 00:18:26.947 valid_lft forever preferred_lft forever 00:18:26.947 17:27:46 -- nvmf/common.sh@410 -- # return 0 00:18:26.947 17:27:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:26.947 17:27:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:26.947 17:27:46 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:26.947 17:27:46 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:26.947 17:27:46 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:26.947 17:27:46 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:26.947 17:27:46 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:26.947 17:27:46 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:26.947 17:27:46 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:26.947 17:27:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:26.947 17:27:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:26.947 17:27:46 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:26.947 17:27:46 -- nvmf/common.sh@104 -- # continue 2 00:18:26.947 17:27:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:26.947 17:27:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:26.947 17:27:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:26.947 17:27:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:26.947 17:27:46 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:26.947 17:27:46 -- nvmf/common.sh@104 -- # continue 2 00:18:26.947 17:27:46 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:26.947 17:27:46 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:26.947 17:27:46 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:26.947 17:27:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:26.947 17:27:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:26.947 17:27:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:26.947 17:27:46 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:26.947 17:27:46 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:26.947 17:27:46 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:26.947 17:27:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:26.947 17:27:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:26.947 17:27:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:26.947 17:27:46 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:26.947 192.168.100.9' 00:18:26.947 17:27:46 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:26.947 192.168.100.9' 00:18:26.947 17:27:46 -- nvmf/common.sh@445 -- # head -n 1 00:18:26.947 17:27:46 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:26.947 17:27:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:26.947 192.168.100.9' 00:18:26.947 17:27:46 -- nvmf/common.sh@446 -- # tail -n +2 00:18:26.947 17:27:46 -- nvmf/common.sh@446 -- # head -n 1 00:18:26.947 17:27:46 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:26.947 17:27:46 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:26.947 17:27:46 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:26.947 17:27:46 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:26.947 17:27:46 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:26.947 17:27:46 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:26.947 17:27:46 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:26.947 17:27:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:26.947 17:27:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:26.947 17:27:46 -- common/autotest_common.sh@10 -- # set +x 00:18:26.947 17:27:46 -- nvmf/common.sh@469 -- # nvmfpid=2702008 00:18:26.947 17:27:46 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:26.947 17:27:46 -- nvmf/common.sh@470 -- # waitforlisten 2702008 00:18:26.947 17:27:46 -- common/autotest_common.sh@829 -- # '[' -z 2702008 ']' 00:18:26.947 17:27:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.947 17:27:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.947 17:27:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.947 17:27:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.947 17:27:46 -- common/autotest_common.sh@10 -- # set +x 00:18:27.207 [2024-11-09 17:27:46.745652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:27.207 [2024-11-09 17:27:46.745705] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.207 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.207 [2024-11-09 17:27:46.814663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:27.207 [2024-11-09 17:27:46.888940] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:27.207 [2024-11-09 17:27:46.889055] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.207 [2024-11-09 17:27:46.889064] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.207 [2024-11-09 17:27:46.889073] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
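For context, the target under test here is launched straight from the build tree with full tracing enabled and held at --wait-for-rpc until the script configures it. A minimal sketch of the equivalent manual invocation, using only the paths and flags that appear in this run (the trace-snapshot command is the one the target itself suggests above):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    # after the run, capture a snapshot of the advertised tracepoints:
    spdk_trace -s nvmf -i 0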
00:18:27.207 [2024-11-09 17:27:46.891474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.207 [2024-11-09 17:27:46.891590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.207 [2024-11-09 17:27:46.891492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.207 [2024-11-09 17:27:46.891588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:28.220 17:27:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.220 17:27:47 -- common/autotest_common.sh@862 -- # return 0 00:18:28.220 17:27:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:28.220 17:27:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.220 17:27:47 -- common/autotest_common.sh@10 -- # set +x 00:18:28.220 17:27:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:28.220 17:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.220 17:27:47 -- common/autotest_common.sh@10 -- # set +x 00:18:28.220 17:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:28.220 17:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.220 17:27:47 -- common/autotest_common.sh@10 -- # set +x 00:18:28.220 17:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:28.220 17:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.220 17:27:47 -- common/autotest_common.sh@10 -- # set +x 00:18:28.220 [2024-11-09 17:27:47.690870] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cac0c0/0x1cb05b0) succeed. 00:18:28.220 [2024-11-09 17:27:47.699777] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cad6b0/0x1cf1c50) succeed. 
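With the target up, the script drives configuration through the rpc_cmd helper; a rough equivalent using SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket, with the arguments exactly as traced immediately above and below, would be:

    scripts/rpc.py bdev_set_options -p 5 -c 1
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420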
00:18:28.220 17:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:28.220 17:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.220 17:27:47 -- common/autotest_common.sh@10 -- # set +x 00:18:28.220 Malloc0 00:18:28.220 17:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:28.220 17:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.220 17:27:47 -- common/autotest_common.sh@10 -- # set +x 00:18:28.220 17:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:28.220 17:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.220 17:27:47 -- common/autotest_common.sh@10 -- # set +x 00:18:28.220 17:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:28.220 17:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.220 17:27:47 -- common/autotest_common.sh@10 -- # set +x 00:18:28.220 [2024-11-09 17:27:47.875448] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:28.220 17:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2702148 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@30 -- # READ_PID=2702151 00:18:28.220 17:27:47 -- nvmf/common.sh@520 -- # config=() 00:18:28.220 17:27:47 -- nvmf/common.sh@520 -- # local subsystem config 00:18:28.220 17:27:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:28.220 17:27:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:28.220 { 00:18:28.220 "params": { 00:18:28.220 "name": "Nvme$subsystem", 00:18:28.220 "trtype": "$TEST_TRANSPORT", 00:18:28.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.220 "adrfam": "ipv4", 00:18:28.220 "trsvcid": "$NVMF_PORT", 00:18:28.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.220 "hdgst": ${hdgst:-false}, 00:18:28.220 "ddgst": ${ddgst:-false} 00:18:28.220 }, 00:18:28.220 "method": "bdev_nvme_attach_controller" 00:18:28.220 } 00:18:28.220 EOF 00:18:28.220 )") 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2702154 00:18:28.220 17:27:47 -- nvmf/common.sh@520 -- # config=() 00:18:28.220 17:27:47 -- nvmf/common.sh@520 -- # local subsystem config 00:18:28.220 17:27:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:28.220 17:27:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:28.220 { 00:18:28.220 "params": { 00:18:28.220 "name": 
"Nvme$subsystem", 00:18:28.220 "trtype": "$TEST_TRANSPORT", 00:18:28.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.220 "adrfam": "ipv4", 00:18:28.220 "trsvcid": "$NVMF_PORT", 00:18:28.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.220 "hdgst": ${hdgst:-false}, 00:18:28.220 "ddgst": ${ddgst:-false} 00:18:28.220 }, 00:18:28.220 "method": "bdev_nvme_attach_controller" 00:18:28.220 } 00:18:28.220 EOF 00:18:28.220 )") 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2702158 00:18:28.220 17:27:47 -- nvmf/common.sh@542 -- # cat 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@35 -- # sync 00:18:28.220 17:27:47 -- nvmf/common.sh@520 -- # config=() 00:18:28.220 17:27:47 -- nvmf/common.sh@520 -- # local subsystem config 00:18:28.220 17:27:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:28.220 17:27:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:28.220 { 00:18:28.220 "params": { 00:18:28.220 "name": "Nvme$subsystem", 00:18:28.220 "trtype": "$TEST_TRANSPORT", 00:18:28.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.220 "adrfam": "ipv4", 00:18:28.220 "trsvcid": "$NVMF_PORT", 00:18:28.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.220 "hdgst": ${hdgst:-false}, 00:18:28.220 "ddgst": ${ddgst:-false} 00:18:28.220 }, 00:18:28.220 "method": "bdev_nvme_attach_controller" 00:18:28.220 } 00:18:28.220 EOF 00:18:28.220 )") 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:28.220 17:27:47 -- nvmf/common.sh@520 -- # config=() 00:18:28.220 17:27:47 -- nvmf/common.sh@542 -- # cat 00:18:28.220 17:27:47 -- nvmf/common.sh@520 -- # local subsystem config 00:18:28.220 17:27:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:28.220 17:27:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:28.220 { 00:18:28.220 "params": { 00:18:28.220 "name": "Nvme$subsystem", 00:18:28.220 "trtype": "$TEST_TRANSPORT", 00:18:28.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.220 "adrfam": "ipv4", 00:18:28.220 "trsvcid": "$NVMF_PORT", 00:18:28.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.220 "hdgst": ${hdgst:-false}, 00:18:28.220 "ddgst": ${ddgst:-false} 00:18:28.220 }, 00:18:28.220 "method": "bdev_nvme_attach_controller" 00:18:28.220 } 00:18:28.220 EOF 00:18:28.220 )") 00:18:28.220 17:27:47 -- nvmf/common.sh@542 -- # cat 00:18:28.220 17:27:47 -- target/bdev_io_wait.sh@37 -- # wait 2702148 00:18:28.220 17:27:47 -- nvmf/common.sh@542 -- # cat 00:18:28.220 17:27:47 -- nvmf/common.sh@544 -- # jq . 00:18:28.220 17:27:47 -- nvmf/common.sh@544 -- # jq . 00:18:28.220 17:27:47 -- nvmf/common.sh@544 -- # jq . 
00:18:28.220 17:27:47 -- nvmf/common.sh@545 -- # IFS=, 00:18:28.220 17:27:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:28.220 "params": { 00:18:28.220 "name": "Nvme1", 00:18:28.220 "trtype": "rdma", 00:18:28.220 "traddr": "192.168.100.8", 00:18:28.220 "adrfam": "ipv4", 00:18:28.220 "trsvcid": "4420", 00:18:28.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.220 "hdgst": false, 00:18:28.220 "ddgst": false 00:18:28.220 }, 00:18:28.220 "method": "bdev_nvme_attach_controller" 00:18:28.220 }' 00:18:28.220 17:27:47 -- nvmf/common.sh@544 -- # jq . 00:18:28.220 17:27:47 -- nvmf/common.sh@545 -- # IFS=, 00:18:28.220 17:27:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:28.220 "params": { 00:18:28.220 "name": "Nvme1", 00:18:28.220 "trtype": "rdma", 00:18:28.220 "traddr": "192.168.100.8", 00:18:28.220 "adrfam": "ipv4", 00:18:28.220 "trsvcid": "4420", 00:18:28.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.220 "hdgst": false, 00:18:28.220 "ddgst": false 00:18:28.220 }, 00:18:28.220 "method": "bdev_nvme_attach_controller" 00:18:28.220 }' 00:18:28.221 17:27:47 -- nvmf/common.sh@545 -- # IFS=, 00:18:28.221 17:27:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:28.221 "params": { 00:18:28.221 "name": "Nvme1", 00:18:28.221 "trtype": "rdma", 00:18:28.221 "traddr": "192.168.100.8", 00:18:28.221 "adrfam": "ipv4", 00:18:28.221 "trsvcid": "4420", 00:18:28.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.221 "hdgst": false, 00:18:28.221 "ddgst": false 00:18:28.221 }, 00:18:28.221 "method": "bdev_nvme_attach_controller" 00:18:28.221 }' 00:18:28.221 17:27:47 -- nvmf/common.sh@545 -- # IFS=, 00:18:28.221 17:27:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:28.221 "params": { 00:18:28.221 "name": "Nvme1", 00:18:28.221 "trtype": "rdma", 00:18:28.221 "traddr": "192.168.100.8", 00:18:28.221 "adrfam": "ipv4", 00:18:28.221 "trsvcid": "4420", 00:18:28.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.221 "hdgst": false, 00:18:28.221 "ddgst": false 00:18:28.221 }, 00:18:28.221 "method": "bdev_nvme_attach_controller" 00:18:28.221 }' 00:18:28.221 [2024-11-09 17:27:47.923312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:28.221 [2024-11-09 17:27:47.923367] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:28.221 [2024-11-09 17:27:47.927033] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:28.221 [2024-11-09 17:27:47.927079] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:28.221 [2024-11-09 17:27:47.929034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:28.221 [2024-11-09 17:27:47.929074] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:28.221 [2024-11-09 17:27:47.932270] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:28.221 [2024-11-09 17:27:47.932317] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:28.492 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.492 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.492 [2024-11-09 17:27:48.113001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.492 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.492 [2024-11-09 17:27:48.176576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.492 [2024-11-09 17:27:48.190100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:28.492 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.492 [2024-11-09 17:27:48.241838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.492 [2024-11-09 17:27:48.248948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:28.751 [2024-11-09 17:27:48.309217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:28.751 [2024-11-09 17:27:48.335349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.751 [2024-11-09 17:27:48.426460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:28.751 Running I/O for 1 seconds... 00:18:28.751 Running I/O for 1 seconds... 00:18:28.751 Running I/O for 1 seconds... 00:18:29.010 Running I/O for 1 seconds... 
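Each of the four bdevperf instances above runs the same attach-controller configuration, generated by gen_nvmf_target_json and passed in over --json /dev/fd/63; only the core mask (-m 0x10/0x20/0x40/0x80, instance -i 1..4) and the workload (-w write/read/flush/unmap) differ. Reformatted, the per-controller entry printed in the trace above amounts to:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "rdma",
        "traddr": "192.168.100.8",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

gen_nvmf_target_json wraps this entry in a full bdev-subsystem config before handing it to bdevperf; the wrapper itself is not shown in the trace.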
00:18:29.946 00:18:29.946 Latency(us) 00:18:29.946 [2024-11-09T16:27:49.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.946 [2024-11-09T16:27:49.716Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:29.946 Nvme1n1 : 1.01 16955.05 66.23 0.00 0.00 7526.09 4010.80 14470.35 00:18:29.946 [2024-11-09T16:27:49.716Z] =================================================================================================================== 00:18:29.946 [2024-11-09T16:27:49.716Z] Total : 16955.05 66.23 0.00 0.00 7526.09 4010.80 14470.35 00:18:29.946 00:18:29.946 Latency(us) 00:18:29.946 [2024-11-09T16:27:49.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.946 [2024-11-09T16:27:49.716Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:29.946 Nvme1n1 : 1.00 266127.06 1039.56 0.00 0.00 479.55 190.05 1690.83 00:18:29.946 [2024-11-09T16:27:49.716Z] =================================================================================================================== 00:18:29.946 [2024-11-09T16:27:49.716Z] Total : 266127.06 1039.56 0.00 0.00 479.55 190.05 1690.83 00:18:29.946 00:18:29.946 Latency(us) 00:18:29.946 [2024-11-09T16:27:49.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.947 [2024-11-09T16:27:49.717Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:29.947 Nvme1n1 : 1.01 15710.24 61.37 0.00 0.00 8123.51 5006.95 16252.93 00:18:29.947 [2024-11-09T16:27:49.717Z] =================================================================================================================== 00:18:29.947 [2024-11-09T16:27:49.717Z] Total : 15710.24 61.37 0.00 0.00 8123.51 5006.95 16252.93 00:18:29.947 00:18:29.947 Latency(us) 00:18:29.947 [2024-11-09T16:27:49.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.947 [2024-11-09T16:27:49.717Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:29.947 Nvme1n1 : 1.00 18498.77 72.26 0.00 0.00 6904.32 3289.91 18559.80 00:18:29.947 [2024-11-09T16:27:49.717Z] =================================================================================================================== 00:18:29.947 [2024-11-09T16:27:49.717Z] Total : 18498.77 72.26 0.00 0.00 6904.32 3289.91 18559.80 00:18:30.205 17:27:49 -- target/bdev_io_wait.sh@38 -- # wait 2702151 00:18:30.205 17:27:49 -- target/bdev_io_wait.sh@39 -- # wait 2702154 00:18:30.205 17:27:49 -- target/bdev_io_wait.sh@40 -- # wait 2702158 00:18:30.205 17:27:49 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:30.205 17:27:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.205 17:27:49 -- common/autotest_common.sh@10 -- # set +x 00:18:30.205 17:27:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.205 17:27:49 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:30.205 17:27:49 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:30.205 17:27:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:30.205 17:27:49 -- nvmf/common.sh@116 -- # sync 00:18:30.205 17:27:49 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:30.205 17:27:49 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:30.205 17:27:49 -- nvmf/common.sh@119 -- # set +e 00:18:30.205 17:27:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:30.205 17:27:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:30.205 rmmod nvme_rdma 
00:18:30.205 rmmod nvme_fabrics 00:18:30.205 17:27:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:30.205 17:27:49 -- nvmf/common.sh@123 -- # set -e 00:18:30.205 17:27:49 -- nvmf/common.sh@124 -- # return 0 00:18:30.205 17:27:49 -- nvmf/common.sh@477 -- # '[' -n 2702008 ']' 00:18:30.205 17:27:49 -- nvmf/common.sh@478 -- # killprocess 2702008 00:18:30.205 17:27:49 -- common/autotest_common.sh@936 -- # '[' -z 2702008 ']' 00:18:30.205 17:27:49 -- common/autotest_common.sh@940 -- # kill -0 2702008 00:18:30.205 17:27:49 -- common/autotest_common.sh@941 -- # uname 00:18:30.205 17:27:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:30.205 17:27:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2702008 00:18:30.463 17:27:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:30.463 17:27:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:30.463 17:27:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2702008' 00:18:30.463 killing process with pid 2702008 00:18:30.463 17:27:50 -- common/autotest_common.sh@955 -- # kill 2702008 00:18:30.463 17:27:50 -- common/autotest_common.sh@960 -- # wait 2702008 00:18:30.721 17:27:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:30.721 17:27:50 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:30.721 00:18:30.721 real 0m10.572s 00:18:30.721 user 0m21.204s 00:18:30.721 sys 0m6.504s 00:18:30.721 17:27:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:30.721 17:27:50 -- common/autotest_common.sh@10 -- # set +x 00:18:30.721 ************************************ 00:18:30.721 END TEST nvmf_bdev_io_wait 00:18:30.721 ************************************ 00:18:30.721 17:27:50 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:30.721 17:27:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:30.721 17:27:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:30.721 17:27:50 -- common/autotest_common.sh@10 -- # set +x 00:18:30.721 ************************************ 00:18:30.721 START TEST nvmf_queue_depth 00:18:30.721 ************************************ 00:18:30.721 17:27:50 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:30.721 * Looking for test storage... 
00:18:30.721 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:30.721 17:27:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:30.721 17:27:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:30.721 17:27:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:30.980 17:27:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:30.980 17:27:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:30.980 17:27:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:30.980 17:27:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:30.980 17:27:50 -- scripts/common.sh@335 -- # IFS=.-: 00:18:30.980 17:27:50 -- scripts/common.sh@335 -- # read -ra ver1 00:18:30.980 17:27:50 -- scripts/common.sh@336 -- # IFS=.-: 00:18:30.980 17:27:50 -- scripts/common.sh@336 -- # read -ra ver2 00:18:30.980 17:27:50 -- scripts/common.sh@337 -- # local 'op=<' 00:18:30.980 17:27:50 -- scripts/common.sh@339 -- # ver1_l=2 00:18:30.980 17:27:50 -- scripts/common.sh@340 -- # ver2_l=1 00:18:30.980 17:27:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:30.980 17:27:50 -- scripts/common.sh@343 -- # case "$op" in 00:18:30.980 17:27:50 -- scripts/common.sh@344 -- # : 1 00:18:30.980 17:27:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:30.980 17:27:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:30.980 17:27:50 -- scripts/common.sh@364 -- # decimal 1 00:18:30.980 17:27:50 -- scripts/common.sh@352 -- # local d=1 00:18:30.980 17:27:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:30.980 17:27:50 -- scripts/common.sh@354 -- # echo 1 00:18:30.980 17:27:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:30.980 17:27:50 -- scripts/common.sh@365 -- # decimal 2 00:18:30.980 17:27:50 -- scripts/common.sh@352 -- # local d=2 00:18:30.980 17:27:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:30.980 17:27:50 -- scripts/common.sh@354 -- # echo 2 00:18:30.980 17:27:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:30.980 17:27:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:30.980 17:27:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:30.980 17:27:50 -- scripts/common.sh@367 -- # return 0 00:18:30.980 17:27:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:30.980 17:27:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:30.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.980 --rc genhtml_branch_coverage=1 00:18:30.980 --rc genhtml_function_coverage=1 00:18:30.980 --rc genhtml_legend=1 00:18:30.980 --rc geninfo_all_blocks=1 00:18:30.980 --rc geninfo_unexecuted_blocks=1 00:18:30.980 00:18:30.980 ' 00:18:30.980 17:27:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:30.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.980 --rc genhtml_branch_coverage=1 00:18:30.980 --rc genhtml_function_coverage=1 00:18:30.980 --rc genhtml_legend=1 00:18:30.980 --rc geninfo_all_blocks=1 00:18:30.980 --rc geninfo_unexecuted_blocks=1 00:18:30.980 00:18:30.980 ' 00:18:30.980 17:27:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:30.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.980 --rc genhtml_branch_coverage=1 00:18:30.980 --rc genhtml_function_coverage=1 00:18:30.980 --rc genhtml_legend=1 00:18:30.980 --rc geninfo_all_blocks=1 00:18:30.980 --rc geninfo_unexecuted_blocks=1 00:18:30.980 00:18:30.980 ' 
00:18:30.980 17:27:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:30.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.980 --rc genhtml_branch_coverage=1 00:18:30.980 --rc genhtml_function_coverage=1 00:18:30.980 --rc genhtml_legend=1 00:18:30.980 --rc geninfo_all_blocks=1 00:18:30.980 --rc geninfo_unexecuted_blocks=1 00:18:30.980 00:18:30.980 ' 00:18:30.980 17:27:50 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.980 17:27:50 -- nvmf/common.sh@7 -- # uname -s 00:18:30.980 17:27:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.980 17:27:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.980 17:27:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.980 17:27:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.980 17:27:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.980 17:27:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.980 17:27:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.980 17:27:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.980 17:27:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.980 17:27:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.980 17:27:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:30.980 17:27:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:30.980 17:27:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.980 17:27:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.980 17:27:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.980 17:27:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:30.980 17:27:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.980 17:27:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.980 17:27:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.980 17:27:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.980 17:27:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.980 17:27:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.980 17:27:50 -- paths/export.sh@5 -- # export PATH 00:18:30.980 17:27:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.980 17:27:50 -- nvmf/common.sh@46 -- # : 0 00:18:30.980 17:27:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:30.980 17:27:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:30.980 17:27:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:30.980 17:27:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.980 17:27:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.980 17:27:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:30.980 17:27:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:30.980 17:27:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:30.980 17:27:50 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:30.980 17:27:50 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:30.980 17:27:50 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.981 17:27:50 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:30.981 17:27:50 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:30.981 17:27:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.981 17:27:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:30.981 17:27:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:30.981 17:27:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:30.981 17:27:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.981 17:27:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.981 17:27:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.981 17:27:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:30.981 17:27:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:30.981 17:27:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:30.981 17:27:50 -- common/autotest_common.sh@10 -- # set +x 00:18:37.547 17:27:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:37.547 17:27:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:37.547 17:27:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:37.547 17:27:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:37.547 17:27:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:37.547 17:27:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:37.547 17:27:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:37.547 17:27:57 -- nvmf/common.sh@294 -- # net_devs=() 
00:18:37.547 17:27:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:37.547 17:27:57 -- nvmf/common.sh@295 -- # e810=() 00:18:37.547 17:27:57 -- nvmf/common.sh@295 -- # local -ga e810 00:18:37.547 17:27:57 -- nvmf/common.sh@296 -- # x722=() 00:18:37.547 17:27:57 -- nvmf/common.sh@296 -- # local -ga x722 00:18:37.547 17:27:57 -- nvmf/common.sh@297 -- # mlx=() 00:18:37.547 17:27:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:37.547 17:27:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.547 17:27:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:37.547 17:27:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.547 17:27:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.547 17:27:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.548 17:27:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.548 17:27:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.548 17:27:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.548 17:27:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.548 17:27:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.548 17:27:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.548 17:27:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:37.548 17:27:57 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:37.548 17:27:57 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:37.548 17:27:57 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:37.548 17:27:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:37.548 17:27:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:37.548 17:27:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:37.548 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:37.548 17:27:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:37.548 17:27:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:37.548 17:27:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:37.548 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:37.548 17:27:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:37.548 17:27:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:37.548 17:27:57 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:37.548 17:27:57 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.548 17:27:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:37.548 17:27:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.548 17:27:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:37.548 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:37.548 17:27:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.548 17:27:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:37.548 17:27:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.548 17:27:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:37.548 17:27:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.548 17:27:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:37.548 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:37.548 17:27:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.548 17:27:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:37.548 17:27:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:37.548 17:27:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:37.548 17:27:57 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:37.548 17:27:57 -- nvmf/common.sh@57 -- # uname 00:18:37.548 17:27:57 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:37.548 17:27:57 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:37.548 17:27:57 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:37.548 17:27:57 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:37.548 17:27:57 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:37.548 17:27:57 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:37.548 17:27:57 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:37.548 17:27:57 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:37.548 17:27:57 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:37.548 17:27:57 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:37.548 17:27:57 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:37.548 17:27:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:37.548 17:27:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:37.548 17:27:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:37.548 17:27:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:37.548 17:27:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:37.548 17:27:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:37.548 17:27:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.548 17:27:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:37.548 17:27:57 -- nvmf/common.sh@104 -- # continue 2 00:18:37.548 17:27:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:37.548 17:27:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.548 17:27:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.548 17:27:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:37.548 17:27:57 -- 
nvmf/common.sh@104 -- # continue 2 00:18:37.548 17:27:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:37.548 17:27:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:37.548 17:27:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:37.548 17:27:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:37.548 17:27:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:37.548 17:27:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:37.548 17:27:57 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:37.548 17:27:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:37.548 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:37.548 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:37.548 altname enp217s0f0np0 00:18:37.548 altname ens818f0np0 00:18:37.548 inet 192.168.100.8/24 scope global mlx_0_0 00:18:37.548 valid_lft forever preferred_lft forever 00:18:37.548 17:27:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:37.548 17:27:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:37.548 17:27:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:37.548 17:27:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:37.548 17:27:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:37.548 17:27:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:37.548 17:27:57 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:37.548 17:27:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:37.548 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:37.548 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:37.548 altname enp217s0f1np1 00:18:37.548 altname ens818f1np1 00:18:37.548 inet 192.168.100.9/24 scope global mlx_0_1 00:18:37.548 valid_lft forever preferred_lft forever 00:18:37.548 17:27:57 -- nvmf/common.sh@410 -- # return 0 00:18:37.548 17:27:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:37.548 17:27:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:37.548 17:27:57 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:37.548 17:27:57 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:37.548 17:27:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:37.548 17:27:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:37.548 17:27:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:37.548 17:27:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:37.548 17:27:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:37.548 17:27:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:37.548 17:27:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.548 17:27:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:37.548 17:27:57 -- nvmf/common.sh@104 -- # continue 2 00:18:37.548 17:27:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:37.548 17:27:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.548 17:27:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:37.548 17:27:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.548 17:27:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:18:37.548 17:27:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:37.548 17:27:57 -- nvmf/common.sh@104 -- # continue 2 00:18:37.548 17:27:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:37.548 17:27:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:37.548 17:27:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:37.548 17:27:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:37.548 17:27:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:37.548 17:27:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:37.548 17:27:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:37.548 17:27:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:37.548 17:27:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:37.548 17:27:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:37.548 17:27:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:37.548 17:27:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:37.548 17:27:57 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:37.548 192.168.100.9' 00:18:37.548 17:27:57 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:37.548 192.168.100.9' 00:18:37.548 17:27:57 -- nvmf/common.sh@445 -- # head -n 1 00:18:37.548 17:27:57 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:37.548 17:27:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:37.548 192.168.100.9' 00:18:37.548 17:27:57 -- nvmf/common.sh@446 -- # tail -n +2 00:18:37.548 17:27:57 -- nvmf/common.sh@446 -- # head -n 1 00:18:37.548 17:27:57 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:37.548 17:27:57 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:37.549 17:27:57 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:37.549 17:27:57 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:37.549 17:27:57 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:37.549 17:27:57 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:37.549 17:27:57 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:37.549 17:27:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:37.549 17:27:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:37.549 17:27:57 -- common/autotest_common.sh@10 -- # set +x 00:18:37.549 17:27:57 -- nvmf/common.sh@469 -- # nvmfpid=2705868 00:18:37.549 17:27:57 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:37.549 17:27:57 -- nvmf/common.sh@470 -- # waitforlisten 2705868 00:18:37.549 17:27:57 -- common/autotest_common.sh@829 -- # '[' -z 2705868 ']' 00:18:37.549 17:27:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.549 17:27:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.549 17:27:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.549 17:27:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.549 17:27:57 -- common/autotest_common.sh@10 -- # set +x 00:18:37.808 [2024-11-09 17:27:57.358751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
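The target-IP selection traced above (nvmf/common.sh@444-447) is a plain head/tail split of the addresses collected per RDMA interface; a condensed sketch using the values from this run:

    # RDMA_IP_LIST carries one discovered address per line (192.168.100.8, then .9).
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    # nvmf/common.sh@447 then only proceeds if the first address is non-empty.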
00:18:37.808 [2024-11-09 17:27:57.358801] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.808 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.808 [2024-11-09 17:27:57.427313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.808 [2024-11-09 17:27:57.496154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:37.808 [2024-11-09 17:27:57.496280] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.808 [2024-11-09 17:27:57.496290] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.808 [2024-11-09 17:27:57.496299] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:37.808 [2024-11-09 17:27:57.496319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.745 17:27:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.745 17:27:58 -- common/autotest_common.sh@862 -- # return 0 00:18:38.745 17:27:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:38.745 17:27:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:38.745 17:27:58 -- common/autotest_common.sh@10 -- # set +x 00:18:38.745 17:27:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.745 17:27:58 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:38.745 17:27:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.745 17:27:58 -- common/autotest_common.sh@10 -- # set +x 00:18:38.745 [2024-11-09 17:27:58.235599] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfc1230/0xfc5720) succeed. 00:18:38.745 [2024-11-09 17:27:58.244668] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfc2730/0x1006dc0) succeed. 
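The nvmfappstart step whose EAL and RDMA start-up notices appear above boils down to launching nvmf_tgt and waiting on its RPC socket; a sketch with the flags taken from the trace (waitforlisten is the autotest_common.sh helper, and the relative build path stands in for the workspace path shown in the log):

    # Core mask 0x2, shm id 0, all tracepoint groups enabled; then block until
    # the target answers on /var/tmp/spdk.sock.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"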
00:18:38.745 17:27:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.745 17:27:58 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:38.745 17:27:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.745 17:27:58 -- common/autotest_common.sh@10 -- # set +x 00:18:38.745 Malloc0 00:18:38.745 17:27:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.745 17:27:58 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:38.745 17:27:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.745 17:27:58 -- common/autotest_common.sh@10 -- # set +x 00:18:38.745 17:27:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.745 17:27:58 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:38.745 17:27:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.745 17:27:58 -- common/autotest_common.sh@10 -- # set +x 00:18:38.745 17:27:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.745 17:27:58 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:38.745 17:27:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.745 17:27:58 -- common/autotest_common.sh@10 -- # set +x 00:18:38.745 [2024-11-09 17:27:58.337347] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:38.745 17:27:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.745 17:27:58 -- target/queue_depth.sh@30 -- # bdevperf_pid=2706088 00:18:38.745 17:27:58 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:38.745 17:27:58 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:38.745 17:27:58 -- target/queue_depth.sh@33 -- # waitforlisten 2706088 /var/tmp/bdevperf.sock 00:18:38.745 17:27:58 -- common/autotest_common.sh@829 -- # '[' -z 2706088 ']' 00:18:38.745 17:27:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.745 17:27:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.745 17:27:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.745 17:27:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.745 17:27:58 -- common/autotest_common.sh@10 -- # set +x 00:18:38.745 [2024-11-09 17:27:58.386588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
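The rpc_cmd sequence traced above for the queue_depth target maps onto the following scripts/rpc.py invocations (method names and arguments copied from the trace; sketch only):

    # RDMA transport, 64 MB malloc bdev with 512-byte blocks, subsystem cnode1,
    # its namespace, and an RDMA listener on the first target IP / port 4420.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420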
00:18:38.745 [2024-11-09 17:27:58.386632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2706088 ] 00:18:38.745 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.745 [2024-11-09 17:27:58.456154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.004 [2024-11-09 17:27:58.527978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.570 17:27:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.571 17:27:59 -- common/autotest_common.sh@862 -- # return 0 00:18:39.571 17:27:59 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:39.571 17:27:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.571 17:27:59 -- common/autotest_common.sh@10 -- # set +x 00:18:39.571 NVMe0n1 00:18:39.571 17:27:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.571 17:27:59 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:39.829 Running I/O for 10 seconds... 00:18:49.805 00:18:49.805 Latency(us) 00:18:49.805 [2024-11-09T16:28:09.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.805 [2024-11-09T16:28:09.575Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:49.805 Verification LBA range: start 0x0 length 0x4000 00:18:49.805 NVMe0n1 : 10.03 29448.26 115.03 0.00 0.00 34695.70 7759.46 34812.72 00:18:49.805 [2024-11-09T16:28:09.575Z] =================================================================================================================== 00:18:49.805 [2024-11-09T16:28:09.575Z] Total : 29448.26 115.03 0.00 0.00 34695.70 7759.46 34812.72 00:18:49.805 0 00:18:49.805 17:28:09 -- target/queue_depth.sh@39 -- # killprocess 2706088 00:18:49.805 17:28:09 -- common/autotest_common.sh@936 -- # '[' -z 2706088 ']' 00:18:49.805 17:28:09 -- common/autotest_common.sh@940 -- # kill -0 2706088 00:18:49.805 17:28:09 -- common/autotest_common.sh@941 -- # uname 00:18:49.805 17:28:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:49.805 17:28:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2706088 00:18:49.805 17:28:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:49.805 17:28:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:49.805 17:28:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2706088' 00:18:49.805 killing process with pid 2706088 00:18:49.805 17:28:09 -- common/autotest_common.sh@955 -- # kill 2706088 00:18:49.805 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.805 00:18:49.805 Latency(us) 00:18:49.805 [2024-11-09T16:28:09.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.805 [2024-11-09T16:28:09.575Z] =================================================================================================================== 00:18:49.805 [2024-11-09T16:28:09.575Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.805 17:28:09 -- common/autotest_common.sh@960 -- # wait 2706088 00:18:50.068 17:28:09 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:50.068 17:28:09 -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:18:50.068 17:28:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:50.068 17:28:09 -- nvmf/common.sh@116 -- # sync 00:18:50.068 17:28:09 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:50.068 17:28:09 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:50.068 17:28:09 -- nvmf/common.sh@119 -- # set +e 00:18:50.068 17:28:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:50.068 17:28:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:50.068 rmmod nvme_rdma 00:18:50.068 rmmod nvme_fabrics 00:18:50.068 17:28:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:50.068 17:28:09 -- nvmf/common.sh@123 -- # set -e 00:18:50.068 17:28:09 -- nvmf/common.sh@124 -- # return 0 00:18:50.068 17:28:09 -- nvmf/common.sh@477 -- # '[' -n 2705868 ']' 00:18:50.068 17:28:09 -- nvmf/common.sh@478 -- # killprocess 2705868 00:18:50.068 17:28:09 -- common/autotest_common.sh@936 -- # '[' -z 2705868 ']' 00:18:50.068 17:28:09 -- common/autotest_common.sh@940 -- # kill -0 2705868 00:18:50.068 17:28:09 -- common/autotest_common.sh@941 -- # uname 00:18:50.068 17:28:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:50.068 17:28:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2705868 00:18:50.328 17:28:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:50.328 17:28:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:50.328 17:28:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2705868' 00:18:50.328 killing process with pid 2705868 00:18:50.328 17:28:09 -- common/autotest_common.sh@955 -- # kill 2705868 00:18:50.328 17:28:09 -- common/autotest_common.sh@960 -- # wait 2705868 00:18:50.586 17:28:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:50.586 17:28:10 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:50.586 00:18:50.586 real 0m19.789s 00:18:50.586 user 0m26.415s 00:18:50.586 sys 0m5.890s 00:18:50.586 17:28:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:50.586 17:28:10 -- common/autotest_common.sh@10 -- # set +x 00:18:50.586 ************************************ 00:18:50.586 END TEST nvmf_queue_depth 00:18:50.586 ************************************ 00:18:50.586 17:28:10 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:18:50.586 17:28:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:50.586 17:28:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:50.586 17:28:10 -- common/autotest_common.sh@10 -- # set +x 00:18:50.586 ************************************ 00:18:50.586 START TEST nvmf_multipath 00:18:50.586 ************************************ 00:18:50.586 17:28:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:18:50.586 * Looking for test storage... 
00:18:50.586 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:50.586 17:28:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:50.586 17:28:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:50.586 17:28:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:50.586 17:28:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:50.586 17:28:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:50.586 17:28:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:50.586 17:28:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:50.586 17:28:10 -- scripts/common.sh@335 -- # IFS=.-: 00:18:50.586 17:28:10 -- scripts/common.sh@335 -- # read -ra ver1 00:18:50.586 17:28:10 -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.586 17:28:10 -- scripts/common.sh@336 -- # read -ra ver2 00:18:50.586 17:28:10 -- scripts/common.sh@337 -- # local 'op=<' 00:18:50.586 17:28:10 -- scripts/common.sh@339 -- # ver1_l=2 00:18:50.586 17:28:10 -- scripts/common.sh@340 -- # ver2_l=1 00:18:50.845 17:28:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:50.845 17:28:10 -- scripts/common.sh@343 -- # case "$op" in 00:18:50.845 17:28:10 -- scripts/common.sh@344 -- # : 1 00:18:50.845 17:28:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:50.845 17:28:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:50.845 17:28:10 -- scripts/common.sh@364 -- # decimal 1 00:18:50.845 17:28:10 -- scripts/common.sh@352 -- # local d=1 00:18:50.845 17:28:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.845 17:28:10 -- scripts/common.sh@354 -- # echo 1 00:18:50.845 17:28:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:50.845 17:28:10 -- scripts/common.sh@365 -- # decimal 2 00:18:50.845 17:28:10 -- scripts/common.sh@352 -- # local d=2 00:18:50.845 17:28:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.845 17:28:10 -- scripts/common.sh@354 -- # echo 2 00:18:50.845 17:28:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:50.845 17:28:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:50.845 17:28:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:50.845 17:28:10 -- scripts/common.sh@367 -- # return 0 00:18:50.845 17:28:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.845 17:28:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:50.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.845 --rc genhtml_branch_coverage=1 00:18:50.845 --rc genhtml_function_coverage=1 00:18:50.845 --rc genhtml_legend=1 00:18:50.845 --rc geninfo_all_blocks=1 00:18:50.845 --rc geninfo_unexecuted_blocks=1 00:18:50.845 00:18:50.845 ' 00:18:50.845 17:28:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:50.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.845 --rc genhtml_branch_coverage=1 00:18:50.845 --rc genhtml_function_coverage=1 00:18:50.845 --rc genhtml_legend=1 00:18:50.845 --rc geninfo_all_blocks=1 00:18:50.845 --rc geninfo_unexecuted_blocks=1 00:18:50.845 00:18:50.845 ' 00:18:50.845 17:28:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:50.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.845 --rc genhtml_branch_coverage=1 00:18:50.845 --rc genhtml_function_coverage=1 00:18:50.845 --rc genhtml_legend=1 00:18:50.845 --rc geninfo_all_blocks=1 00:18:50.845 --rc geninfo_unexecuted_blocks=1 00:18:50.845 00:18:50.845 ' 
00:18:50.845 17:28:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:50.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.845 --rc genhtml_branch_coverage=1 00:18:50.845 --rc genhtml_function_coverage=1 00:18:50.845 --rc genhtml_legend=1 00:18:50.845 --rc geninfo_all_blocks=1 00:18:50.845 --rc geninfo_unexecuted_blocks=1 00:18:50.845 00:18:50.845 ' 00:18:50.845 17:28:10 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.845 17:28:10 -- nvmf/common.sh@7 -- # uname -s 00:18:50.845 17:28:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.845 17:28:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.845 17:28:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.845 17:28:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.845 17:28:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.845 17:28:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.845 17:28:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.845 17:28:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.845 17:28:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.845 17:28:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.845 17:28:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:50.845 17:28:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:50.845 17:28:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.845 17:28:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.845 17:28:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.845 17:28:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:50.845 17:28:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.845 17:28:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.845 17:28:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.845 17:28:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.845 17:28:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.845 17:28:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.845 17:28:10 -- paths/export.sh@5 -- # export PATH 00:18:50.845 17:28:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.845 17:28:10 -- nvmf/common.sh@46 -- # : 0 00:18:50.845 17:28:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:50.845 17:28:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:50.845 17:28:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:50.845 17:28:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.845 17:28:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.845 17:28:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:50.845 17:28:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:50.846 17:28:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:50.846 17:28:10 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:50.846 17:28:10 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:50.846 17:28:10 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:50.846 17:28:10 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:50.846 17:28:10 -- target/multipath.sh@43 -- # nvmftestinit 00:18:50.846 17:28:10 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:50.846 17:28:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.846 17:28:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:50.846 17:28:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:50.846 17:28:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:50.846 17:28:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.846 17:28:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.846 17:28:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.846 17:28:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:50.846 17:28:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:50.846 17:28:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:50.846 17:28:10 -- common/autotest_common.sh@10 -- # set +x 00:18:57.417 17:28:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:57.417 17:28:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:57.417 17:28:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:57.417 17:28:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:57.417 17:28:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:57.417 17:28:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:57.417 17:28:16 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:18:57.417 17:28:16 -- nvmf/common.sh@294 -- # net_devs=() 00:18:57.417 17:28:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:57.417 17:28:16 -- nvmf/common.sh@295 -- # e810=() 00:18:57.417 17:28:16 -- nvmf/common.sh@295 -- # local -ga e810 00:18:57.417 17:28:16 -- nvmf/common.sh@296 -- # x722=() 00:18:57.417 17:28:16 -- nvmf/common.sh@296 -- # local -ga x722 00:18:57.417 17:28:16 -- nvmf/common.sh@297 -- # mlx=() 00:18:57.417 17:28:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:57.417 17:28:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.417 17:28:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.417 17:28:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.418 17:28:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.418 17:28:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.418 17:28:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.418 17:28:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.418 17:28:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.418 17:28:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.418 17:28:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.418 17:28:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.418 17:28:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:57.418 17:28:16 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:57.418 17:28:16 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:57.418 17:28:16 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:57.418 17:28:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:57.418 17:28:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:57.418 17:28:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:57.418 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:57.418 17:28:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:57.418 17:28:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:57.418 17:28:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:57.418 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:57.418 17:28:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:57.418 17:28:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:57.418 17:28:16 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:57.418 17:28:16 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:57.418 17:28:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.418 17:28:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:57.418 17:28:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.418 17:28:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:57.418 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:57.418 17:28:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.418 17:28:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:57.418 17:28:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.418 17:28:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:57.418 17:28:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.418 17:28:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:57.418 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:57.418 17:28:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.418 17:28:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:57.418 17:28:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:57.418 17:28:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:57.418 17:28:16 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:57.418 17:28:16 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:57.418 17:28:16 -- nvmf/common.sh@57 -- # uname 00:18:57.418 17:28:17 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:57.418 17:28:17 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:57.418 17:28:17 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:57.418 17:28:17 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:57.418 17:28:17 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:57.418 17:28:17 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:57.418 17:28:17 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:57.418 17:28:17 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:57.418 17:28:17 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:57.418 17:28:17 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:57.418 17:28:17 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:57.418 17:28:17 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:57.418 17:28:17 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:57.418 17:28:17 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:57.418 17:28:17 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:57.418 17:28:17 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:57.418 17:28:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:57.418 17:28:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:57.418 17:28:17 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:57.418 17:28:17 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:57.418 17:28:17 -- nvmf/common.sh@104 -- # continue 2 00:18:57.418 17:28:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:57.418 17:28:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:57.418 17:28:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:57.418 17:28:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:57.418 17:28:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:18:57.418 17:28:17 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:57.418 17:28:17 -- nvmf/common.sh@104 -- # continue 2 00:18:57.418 17:28:17 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:57.418 17:28:17 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:57.418 17:28:17 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:57.418 17:28:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:57.418 17:28:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:57.418 17:28:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:57.418 17:28:17 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:57.418 17:28:17 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:57.418 17:28:17 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:57.418 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:57.418 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:57.418 altname enp217s0f0np0 00:18:57.418 altname ens818f0np0 00:18:57.418 inet 192.168.100.8/24 scope global mlx_0_0 00:18:57.418 valid_lft forever preferred_lft forever 00:18:57.418 17:28:17 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:57.418 17:28:17 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:57.418 17:28:17 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:57.418 17:28:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:57.418 17:28:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:57.418 17:28:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:57.418 17:28:17 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:57.418 17:28:17 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:57.418 17:28:17 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:57.418 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:57.418 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:57.418 altname enp217s0f1np1 00:18:57.418 altname ens818f1np1 00:18:57.418 inet 192.168.100.9/24 scope global mlx_0_1 00:18:57.418 valid_lft forever preferred_lft forever 00:18:57.418 17:28:17 -- nvmf/common.sh@410 -- # return 0 00:18:57.418 17:28:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:57.418 17:28:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:57.418 17:28:17 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:57.418 17:28:17 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:57.418 17:28:17 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:57.418 17:28:17 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:57.418 17:28:17 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:57.418 17:28:17 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:57.418 17:28:17 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:57.418 17:28:17 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:57.418 17:28:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:57.418 17:28:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:57.418 17:28:17 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:57.418 17:28:17 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:57.418 17:28:17 -- nvmf/common.sh@104 -- # continue 2 00:18:57.418 17:28:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:57.418 17:28:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:57.418 17:28:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:57.418 17:28:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:18:57.418 17:28:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:57.418 17:28:17 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:57.418 17:28:17 -- nvmf/common.sh@104 -- # continue 2 00:18:57.418 17:28:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:57.418 17:28:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:57.418 17:28:17 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:57.418 17:28:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:57.418 17:28:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:57.418 17:28:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:57.418 17:28:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:57.418 17:28:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:57.418 17:28:17 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:57.418 17:28:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:57.418 17:28:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:57.418 17:28:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:57.418 17:28:17 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:57.418 192.168.100.9' 00:18:57.418 17:28:17 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:57.418 192.168.100.9' 00:18:57.418 17:28:17 -- nvmf/common.sh@445 -- # head -n 1 00:18:57.418 17:28:17 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:57.418 17:28:17 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:57.418 192.168.100.9' 00:18:57.418 17:28:17 -- nvmf/common.sh@446 -- # tail -n +2 00:18:57.418 17:28:17 -- nvmf/common.sh@446 -- # head -n 1 00:18:57.677 17:28:17 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:57.677 17:28:17 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:57.677 17:28:17 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:57.677 17:28:17 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:57.677 17:28:17 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:57.677 17:28:17 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:57.677 17:28:17 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:18:57.677 17:28:17 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:18:57.677 17:28:17 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:18:57.677 run this test only with TCP transport for now 00:18:57.677 17:28:17 -- target/multipath.sh@53 -- # nvmftestfini 00:18:57.677 17:28:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:57.677 17:28:17 -- nvmf/common.sh@116 -- # sync 00:18:57.677 17:28:17 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:57.677 17:28:17 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:57.677 17:28:17 -- nvmf/common.sh@119 -- # set +e 00:18:57.677 17:28:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:57.677 17:28:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:57.677 rmmod nvme_rdma 00:18:57.677 rmmod nvme_fabrics 00:18:57.677 17:28:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:57.677 17:28:17 -- nvmf/common.sh@123 -- # set -e 00:18:57.677 17:28:17 -- nvmf/common.sh@124 -- # return 0 00:18:57.677 17:28:17 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:57.677 17:28:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:57.677 17:28:17 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:57.677 17:28:17 -- target/multipath.sh@54 -- # exit 0 00:18:57.677 17:28:17 -- target/multipath.sh@1 -- # nvmftestfini 00:18:57.677 17:28:17 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:18:57.677 17:28:17 -- nvmf/common.sh@116 -- # sync 00:18:57.677 17:28:17 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:57.677 17:28:17 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:57.677 17:28:17 -- nvmf/common.sh@119 -- # set +e 00:18:57.677 17:28:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:57.677 17:28:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:57.677 17:28:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:57.677 17:28:17 -- nvmf/common.sh@123 -- # set -e 00:18:57.677 17:28:17 -- nvmf/common.sh@124 -- # return 0 00:18:57.677 17:28:17 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:57.677 17:28:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:57.677 17:28:17 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:57.677 00:18:57.677 real 0m7.105s 00:18:57.677 user 0m2.029s 00:18:57.677 sys 0m5.283s 00:18:57.677 17:28:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:57.677 17:28:17 -- common/autotest_common.sh@10 -- # set +x 00:18:57.677 ************************************ 00:18:57.677 END TEST nvmf_multipath 00:18:57.677 ************************************ 00:18:57.677 17:28:17 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:18:57.677 17:28:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:57.677 17:28:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:57.677 17:28:17 -- common/autotest_common.sh@10 -- # set +x 00:18:57.677 ************************************ 00:18:57.677 START TEST nvmf_zcopy 00:18:57.677 ************************************ 00:18:57.677 17:28:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:18:57.677 * Looking for test storage... 00:18:57.677 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:57.677 17:28:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:57.677 17:28:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:57.677 17:28:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:57.936 17:28:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:57.936 17:28:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:57.936 17:28:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:57.936 17:28:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:57.936 17:28:17 -- scripts/common.sh@335 -- # IFS=.-: 00:18:57.936 17:28:17 -- scripts/common.sh@335 -- # read -ra ver1 00:18:57.936 17:28:17 -- scripts/common.sh@336 -- # IFS=.-: 00:18:57.936 17:28:17 -- scripts/common.sh@336 -- # read -ra ver2 00:18:57.936 17:28:17 -- scripts/common.sh@337 -- # local 'op=<' 00:18:57.936 17:28:17 -- scripts/common.sh@339 -- # ver1_l=2 00:18:57.936 17:28:17 -- scripts/common.sh@340 -- # ver2_l=1 00:18:57.936 17:28:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:57.936 17:28:17 -- scripts/common.sh@343 -- # case "$op" in 00:18:57.936 17:28:17 -- scripts/common.sh@344 -- # : 1 00:18:57.936 17:28:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:57.936 17:28:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:57.936 17:28:17 -- scripts/common.sh@364 -- # decimal 1 00:18:57.936 17:28:17 -- scripts/common.sh@352 -- # local d=1 00:18:57.936 17:28:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:57.936 17:28:17 -- scripts/common.sh@354 -- # echo 1 00:18:57.936 17:28:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:57.936 17:28:17 -- scripts/common.sh@365 -- # decimal 2 00:18:57.936 17:28:17 -- scripts/common.sh@352 -- # local d=2 00:18:57.936 17:28:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:57.936 17:28:17 -- scripts/common.sh@354 -- # echo 2 00:18:57.936 17:28:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:57.936 17:28:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:57.936 17:28:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:57.936 17:28:17 -- scripts/common.sh@367 -- # return 0 00:18:57.936 17:28:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:57.936 17:28:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:57.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.936 --rc genhtml_branch_coverage=1 00:18:57.936 --rc genhtml_function_coverage=1 00:18:57.936 --rc genhtml_legend=1 00:18:57.936 --rc geninfo_all_blocks=1 00:18:57.936 --rc geninfo_unexecuted_blocks=1 00:18:57.936 00:18:57.936 ' 00:18:57.936 17:28:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:57.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.936 --rc genhtml_branch_coverage=1 00:18:57.936 --rc genhtml_function_coverage=1 00:18:57.936 --rc genhtml_legend=1 00:18:57.936 --rc geninfo_all_blocks=1 00:18:57.936 --rc geninfo_unexecuted_blocks=1 00:18:57.936 00:18:57.936 ' 00:18:57.936 17:28:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:57.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.937 --rc genhtml_branch_coverage=1 00:18:57.937 --rc genhtml_function_coverage=1 00:18:57.937 --rc genhtml_legend=1 00:18:57.937 --rc geninfo_all_blocks=1 00:18:57.937 --rc geninfo_unexecuted_blocks=1 00:18:57.937 00:18:57.937 ' 00:18:57.937 17:28:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:57.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.937 --rc genhtml_branch_coverage=1 00:18:57.937 --rc genhtml_function_coverage=1 00:18:57.937 --rc genhtml_legend=1 00:18:57.937 --rc geninfo_all_blocks=1 00:18:57.937 --rc geninfo_unexecuted_blocks=1 00:18:57.937 00:18:57.937 ' 00:18:57.937 17:28:17 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.937 17:28:17 -- nvmf/common.sh@7 -- # uname -s 00:18:57.937 17:28:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.937 17:28:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.937 17:28:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.937 17:28:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.937 17:28:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.937 17:28:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.937 17:28:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.937 17:28:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.937 17:28:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.937 17:28:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.937 17:28:17 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:57.937 17:28:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:57.937 17:28:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.937 17:28:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.937 17:28:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.937 17:28:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:57.937 17:28:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.937 17:28:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.937 17:28:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.937 17:28:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.937 17:28:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.937 17:28:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.937 17:28:17 -- paths/export.sh@5 -- # export PATH 00:18:57.937 17:28:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.937 17:28:17 -- nvmf/common.sh@46 -- # : 0 00:18:57.937 17:28:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:57.937 17:28:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:57.937 17:28:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:57.937 17:28:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.937 17:28:17 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.937 17:28:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:57.937 17:28:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:57.937 17:28:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:57.937 17:28:17 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:57.937 17:28:17 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:57.937 17:28:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.937 17:28:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:57.937 17:28:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:57.937 17:28:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:57.937 17:28:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.937 17:28:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.937 17:28:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.937 17:28:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:57.937 17:28:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:57.937 17:28:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:57.937 17:28:17 -- common/autotest_common.sh@10 -- # set +x 00:19:04.504 17:28:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:04.504 17:28:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:04.504 17:28:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:04.504 17:28:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:04.504 17:28:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:04.504 17:28:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:04.504 17:28:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:04.504 17:28:23 -- nvmf/common.sh@294 -- # net_devs=() 00:19:04.504 17:28:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:04.504 17:28:23 -- nvmf/common.sh@295 -- # e810=() 00:19:04.504 17:28:23 -- nvmf/common.sh@295 -- # local -ga e810 00:19:04.504 17:28:23 -- nvmf/common.sh@296 -- # x722=() 00:19:04.504 17:28:23 -- nvmf/common.sh@296 -- # local -ga x722 00:19:04.504 17:28:23 -- nvmf/common.sh@297 -- # mlx=() 00:19:04.504 17:28:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:04.504 17:28:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.504 17:28:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.504 17:28:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.504 17:28:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.504 17:28:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.504 17:28:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.504 17:28:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.504 17:28:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.505 17:28:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.505 17:28:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.505 17:28:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.505 17:28:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:04.505 17:28:23 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:04.505 17:28:23 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:04.505 17:28:23 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:04.505 
17:28:23 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:04.505 17:28:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:04.505 17:28:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:04.505 17:28:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:04.505 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:04.505 17:28:23 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:04.505 17:28:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:04.505 17:28:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:04.505 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:04.505 17:28:23 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:04.505 17:28:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:04.505 17:28:23 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:04.505 17:28:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.505 17:28:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:04.505 17:28:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.505 17:28:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:04.505 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:04.505 17:28:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.505 17:28:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:04.505 17:28:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.505 17:28:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:04.505 17:28:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.505 17:28:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:04.505 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:04.505 17:28:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.505 17:28:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:04.505 17:28:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:04.505 17:28:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:04.505 17:28:23 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:04.505 17:28:23 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:04.505 17:28:23 -- nvmf/common.sh@57 -- # uname 00:19:04.505 17:28:24 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:04.505 17:28:24 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:04.505 17:28:24 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:04.505 17:28:24 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:04.505 17:28:24 -- 
nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:04.505 17:28:24 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:04.505 17:28:24 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:04.505 17:28:24 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:04.505 17:28:24 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:04.505 17:28:24 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:04.505 17:28:24 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:04.505 17:28:24 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:04.505 17:28:24 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:04.505 17:28:24 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:04.505 17:28:24 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:04.505 17:28:24 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:04.505 17:28:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:04.505 17:28:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.505 17:28:24 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:04.505 17:28:24 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:04.505 17:28:24 -- nvmf/common.sh@104 -- # continue 2 00:19:04.505 17:28:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:04.505 17:28:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.505 17:28:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:04.505 17:28:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.505 17:28:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:04.505 17:28:24 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:04.505 17:28:24 -- nvmf/common.sh@104 -- # continue 2 00:19:04.505 17:28:24 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:04.505 17:28:24 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:04.505 17:28:24 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:04.505 17:28:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:04.505 17:28:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:04.505 17:28:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:04.505 17:28:24 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:04.505 17:28:24 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:04.505 17:28:24 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:04.505 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:04.505 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:04.505 altname enp217s0f0np0 00:19:04.505 altname ens818f0np0 00:19:04.505 inet 192.168.100.8/24 scope global mlx_0_0 00:19:04.505 valid_lft forever preferred_lft forever 00:19:04.505 17:28:24 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:04.505 17:28:24 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:04.505 17:28:24 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:04.505 17:28:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:04.505 17:28:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:04.505 17:28:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:04.505 17:28:24 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:04.505 17:28:24 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:04.505 17:28:24 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:04.505 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:04.505 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:04.505 altname enp217s0f1np1 00:19:04.505 altname 
ens818f1np1 00:19:04.505 inet 192.168.100.9/24 scope global mlx_0_1 00:19:04.505 valid_lft forever preferred_lft forever 00:19:04.505 17:28:24 -- nvmf/common.sh@410 -- # return 0 00:19:04.505 17:28:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:04.505 17:28:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:04.505 17:28:24 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:04.505 17:28:24 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:04.505 17:28:24 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:04.505 17:28:24 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:04.505 17:28:24 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:04.505 17:28:24 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:04.505 17:28:24 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:04.505 17:28:24 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:04.505 17:28:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:04.505 17:28:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.505 17:28:24 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:04.505 17:28:24 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:04.505 17:28:24 -- nvmf/common.sh@104 -- # continue 2 00:19:04.505 17:28:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:04.505 17:28:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.505 17:28:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:04.505 17:28:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.505 17:28:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:04.505 17:28:24 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:04.505 17:28:24 -- nvmf/common.sh@104 -- # continue 2 00:19:04.505 17:28:24 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:04.505 17:28:24 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:04.505 17:28:24 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:04.505 17:28:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:04.505 17:28:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:04.505 17:28:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:04.505 17:28:24 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:04.505 17:28:24 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:04.505 17:28:24 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:04.505 17:28:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:04.505 17:28:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:04.505 17:28:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:04.505 17:28:24 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:04.505 192.168.100.9' 00:19:04.505 17:28:24 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:04.505 192.168.100.9' 00:19:04.505 17:28:24 -- nvmf/common.sh@445 -- # head -n 1 00:19:04.505 17:28:24 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:04.505 17:28:24 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:04.505 192.168.100.9' 00:19:04.505 17:28:24 -- nvmf/common.sh@446 -- # tail -n +2 00:19:04.505 17:28:24 -- nvmf/common.sh@446 -- # head -n 1 00:19:04.505 17:28:24 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:04.505 17:28:24 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:04.505 17:28:24 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:04.505 
17:28:24 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:04.505 17:28:24 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:04.506 17:28:24 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:04.506 17:28:24 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:04.506 17:28:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:04.506 17:28:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:04.506 17:28:24 -- common/autotest_common.sh@10 -- # set +x 00:19:04.506 17:28:24 -- nvmf/common.sh@469 -- # nvmfpid=2714683 00:19:04.506 17:28:24 -- nvmf/common.sh@470 -- # waitforlisten 2714683 00:19:04.506 17:28:24 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:04.506 17:28:24 -- common/autotest_common.sh@829 -- # '[' -z 2714683 ']' 00:19:04.506 17:28:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.506 17:28:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:04.506 17:28:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.506 17:28:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:04.506 17:28:24 -- common/autotest_common.sh@10 -- # set +x 00:19:04.765 [2024-11-09 17:28:24.290111] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:04.765 [2024-11-09 17:28:24.290164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.765 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.765 [2024-11-09 17:28:24.362269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.765 [2024-11-09 17:28:24.435077] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:04.765 [2024-11-09 17:28:24.435178] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.765 [2024-11-09 17:28:24.435189] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.765 [2024-11-09 17:28:24.435197] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
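The trace above shows how nvmf/common.sh turns the RDMA interface list into the two target addresses: each mlx_0_* interface is queried with ip -o -4 addr show, and the first and second entries of the resulting list become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that extraction, assuming the mlx_0_0/mlx_0_1 names seen in this run, looks like:

    get_ip_address() {
        # print the first IPv4 address assigned to the given interface
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
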
00:19:04.765 [2024-11-09 17:28:24.435216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.702 17:28:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:05.702 17:28:25 -- common/autotest_common.sh@862 -- # return 0 00:19:05.702 17:28:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:05.702 17:28:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:05.702 17:28:25 -- common/autotest_common.sh@10 -- # set +x 00:19:05.702 17:28:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.702 17:28:25 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:19:05.702 17:28:25 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:19:05.702 Unsupported transport: rdma 00:19:05.702 17:28:25 -- target/zcopy.sh@17 -- # exit 0 00:19:05.702 17:28:25 -- target/zcopy.sh@1 -- # process_shm --id 0 00:19:05.702 17:28:25 -- common/autotest_common.sh@806 -- # type=--id 00:19:05.702 17:28:25 -- common/autotest_common.sh@807 -- # id=0 00:19:05.702 17:28:25 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:05.702 17:28:25 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:05.702 17:28:25 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:05.702 17:28:25 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:05.702 17:28:25 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:05.702 17:28:25 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:05.702 nvmf_trace.0 00:19:05.702 17:28:25 -- common/autotest_common.sh@821 -- # return 0 00:19:05.702 17:28:25 -- target/zcopy.sh@1 -- # nvmftestfini 00:19:05.702 17:28:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:05.702 17:28:25 -- nvmf/common.sh@116 -- # sync 00:19:05.702 17:28:25 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:05.702 17:28:25 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:05.702 17:28:25 -- nvmf/common.sh@119 -- # set +e 00:19:05.702 17:28:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:05.702 17:28:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:05.702 rmmod nvme_rdma 00:19:05.702 rmmod nvme_fabrics 00:19:05.702 17:28:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:05.702 17:28:25 -- nvmf/common.sh@123 -- # set -e 00:19:05.702 17:28:25 -- nvmf/common.sh@124 -- # return 0 00:19:05.702 17:28:25 -- nvmf/common.sh@477 -- # '[' -n 2714683 ']' 00:19:05.702 17:28:25 -- nvmf/common.sh@478 -- # killprocess 2714683 00:19:05.702 17:28:25 -- common/autotest_common.sh@936 -- # '[' -z 2714683 ']' 00:19:05.702 17:28:25 -- common/autotest_common.sh@940 -- # kill -0 2714683 00:19:05.702 17:28:25 -- common/autotest_common.sh@941 -- # uname 00:19:05.702 17:28:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:05.702 17:28:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2714683 00:19:05.702 17:28:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:05.702 17:28:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:05.702 17:28:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2714683' 00:19:05.702 killing process with pid 2714683 00:19:05.702 17:28:25 -- common/autotest_common.sh@955 -- # kill 2714683 00:19:05.702 17:28:25 -- common/autotest_common.sh@960 -- # wait 2714683 00:19:05.961 17:28:25 -- nvmf/common.sh@480 -- # '[' 
'' == iso ']' 00:19:05.961 17:28:25 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:05.961 00:19:05.961 real 0m8.164s 00:19:05.961 user 0m3.348s 00:19:05.961 sys 0m5.497s 00:19:05.961 17:28:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:05.961 17:28:25 -- common/autotest_common.sh@10 -- # set +x 00:19:05.961 ************************************ 00:19:05.961 END TEST nvmf_zcopy 00:19:05.961 ************************************ 00:19:05.961 17:28:25 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:05.961 17:28:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:05.961 17:28:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:05.961 17:28:25 -- common/autotest_common.sh@10 -- # set +x 00:19:05.961 ************************************ 00:19:05.961 START TEST nvmf_nmic 00:19:05.961 ************************************ 00:19:05.961 17:28:25 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:05.961 * Looking for test storage... 00:19:05.961 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:05.961 17:28:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:05.961 17:28:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:05.961 17:28:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:05.961 17:28:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:05.961 17:28:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:05.961 17:28:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:05.961 17:28:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:05.961 17:28:25 -- scripts/common.sh@335 -- # IFS=.-: 00:19:05.961 17:28:25 -- scripts/common.sh@335 -- # read -ra ver1 00:19:05.961 17:28:25 -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.961 17:28:25 -- scripts/common.sh@336 -- # read -ra ver2 00:19:05.961 17:28:25 -- scripts/common.sh@337 -- # local 'op=<' 00:19:05.961 17:28:25 -- scripts/common.sh@339 -- # ver1_l=2 00:19:05.961 17:28:25 -- scripts/common.sh@340 -- # ver2_l=1 00:19:05.961 17:28:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:05.961 17:28:25 -- scripts/common.sh@343 -- # case "$op" in 00:19:05.961 17:28:25 -- scripts/common.sh@344 -- # : 1 00:19:05.961 17:28:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:05.961 17:28:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.961 17:28:25 -- scripts/common.sh@364 -- # decimal 1 00:19:05.961 17:28:25 -- scripts/common.sh@352 -- # local d=1 00:19:05.961 17:28:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.961 17:28:25 -- scripts/common.sh@354 -- # echo 1 00:19:06.220 17:28:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:06.220 17:28:25 -- scripts/common.sh@365 -- # decimal 2 00:19:06.220 17:28:25 -- scripts/common.sh@352 -- # local d=2 00:19:06.220 17:28:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.220 17:28:25 -- scripts/common.sh@354 -- # echo 2 00:19:06.220 17:28:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:06.220 17:28:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:06.220 17:28:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:06.220 17:28:25 -- scripts/common.sh@367 -- # return 0 00:19:06.220 17:28:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.220 17:28:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:06.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.220 --rc genhtml_branch_coverage=1 00:19:06.220 --rc genhtml_function_coverage=1 00:19:06.220 --rc genhtml_legend=1 00:19:06.220 --rc geninfo_all_blocks=1 00:19:06.220 --rc geninfo_unexecuted_blocks=1 00:19:06.220 00:19:06.220 ' 00:19:06.220 17:28:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:06.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.220 --rc genhtml_branch_coverage=1 00:19:06.220 --rc genhtml_function_coverage=1 00:19:06.220 --rc genhtml_legend=1 00:19:06.220 --rc geninfo_all_blocks=1 00:19:06.220 --rc geninfo_unexecuted_blocks=1 00:19:06.220 00:19:06.220 ' 00:19:06.220 17:28:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:06.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.220 --rc genhtml_branch_coverage=1 00:19:06.220 --rc genhtml_function_coverage=1 00:19:06.220 --rc genhtml_legend=1 00:19:06.220 --rc geninfo_all_blocks=1 00:19:06.220 --rc geninfo_unexecuted_blocks=1 00:19:06.220 00:19:06.220 ' 00:19:06.220 17:28:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:06.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.220 --rc genhtml_branch_coverage=1 00:19:06.220 --rc genhtml_function_coverage=1 00:19:06.220 --rc genhtml_legend=1 00:19:06.220 --rc geninfo_all_blocks=1 00:19:06.220 --rc geninfo_unexecuted_blocks=1 00:19:06.220 00:19:06.220 ' 00:19:06.220 17:28:25 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.220 17:28:25 -- nvmf/common.sh@7 -- # uname -s 00:19:06.220 17:28:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.220 17:28:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.220 17:28:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.220 17:28:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.220 17:28:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.220 17:28:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.220 17:28:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.220 17:28:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.220 17:28:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.220 17:28:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.220 17:28:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
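In the common.sh setup traced here, the host identity handed to every later nvme connect is produced once by nvme gen-hostnqn, and the UUID suffix of the generated NQN doubles as the host ID. A rough sketch of that step, with the suffix-stripping being an assumption about how the ID is derived rather than something visible in the trace:

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep only the uuid part (assumed derivation)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # later reused as: nvme connect -i 15 "${NVME_HOST[@]}" -t rdma -n <subsystem nqn> -a <ip> -s 4420
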
00:19:06.220 17:28:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:06.220 17:28:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.220 17:28:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.220 17:28:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.221 17:28:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:06.221 17:28:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.221 17:28:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.221 17:28:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.221 17:28:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.221 17:28:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.221 17:28:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.221 17:28:25 -- paths/export.sh@5 -- # export PATH 00:19:06.221 17:28:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.221 17:28:25 -- nvmf/common.sh@46 -- # : 0 00:19:06.221 17:28:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:06.221 17:28:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:06.221 17:28:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:06.221 17:28:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.221 17:28:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.221 17:28:25 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:06.221 17:28:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:06.221 17:28:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:06.221 17:28:25 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:06.221 17:28:25 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:06.221 17:28:25 -- target/nmic.sh@14 -- # nvmftestinit 00:19:06.221 17:28:25 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:06.221 17:28:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.221 17:28:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:06.221 17:28:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:06.221 17:28:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:06.221 17:28:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.221 17:28:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.221 17:28:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.221 17:28:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:06.221 17:28:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:06.221 17:28:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:06.221 17:28:25 -- common/autotest_common.sh@10 -- # set +x 00:19:12.788 17:28:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:12.789 17:28:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:12.789 17:28:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:12.789 17:28:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:12.789 17:28:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:12.789 17:28:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:12.789 17:28:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:12.789 17:28:31 -- nvmf/common.sh@294 -- # net_devs=() 00:19:12.789 17:28:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:12.789 17:28:31 -- nvmf/common.sh@295 -- # e810=() 00:19:12.789 17:28:31 -- nvmf/common.sh@295 -- # local -ga e810 00:19:12.789 17:28:31 -- nvmf/common.sh@296 -- # x722=() 00:19:12.789 17:28:31 -- nvmf/common.sh@296 -- # local -ga x722 00:19:12.789 17:28:31 -- nvmf/common.sh@297 -- # mlx=() 00:19:12.789 17:28:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:12.789 17:28:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.789 17:28:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.789 17:28:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.789 17:28:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.789 17:28:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.789 17:28:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.789 17:28:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.789 17:28:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.789 17:28:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.789 17:28:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.789 17:28:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.789 17:28:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:12.789 17:28:31 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:12.789 17:28:31 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:12.789 17:28:31 
-- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:12.789 17:28:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:12.789 17:28:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:12.789 17:28:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:12.789 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:12.789 17:28:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:12.789 17:28:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:12.789 17:28:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:12.789 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:12.789 17:28:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:12.789 17:28:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:12.789 17:28:31 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:12.789 17:28:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.789 17:28:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:12.789 17:28:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.789 17:28:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:12.789 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:12.789 17:28:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.789 17:28:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:12.789 17:28:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.789 17:28:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:12.789 17:28:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.789 17:28:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:12.789 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:12.789 17:28:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.789 17:28:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:12.789 17:28:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:12.789 17:28:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:12.789 17:28:31 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:12.789 17:28:31 -- nvmf/common.sh@57 -- # uname 00:19:12.789 17:28:31 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:12.789 17:28:31 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:12.789 17:28:31 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:12.789 17:28:31 -- 
nvmf/common.sh@63 -- # modprobe ib_umad 00:19:12.789 17:28:31 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:12.789 17:28:31 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:12.789 17:28:31 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:12.789 17:28:31 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:12.789 17:28:31 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:12.789 17:28:31 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:12.789 17:28:31 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:12.789 17:28:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:12.789 17:28:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:12.789 17:28:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:12.789 17:28:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:12.789 17:28:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:12.789 17:28:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:12.789 17:28:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.789 17:28:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:12.789 17:28:31 -- nvmf/common.sh@104 -- # continue 2 00:19:12.789 17:28:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:12.789 17:28:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.789 17:28:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.789 17:28:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:12.789 17:28:31 -- nvmf/common.sh@104 -- # continue 2 00:19:12.789 17:28:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:12.789 17:28:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:12.789 17:28:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:12.789 17:28:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:12.789 17:28:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:12.789 17:28:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:12.789 17:28:31 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:12.789 17:28:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:12.789 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:12.789 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:12.789 altname enp217s0f0np0 00:19:12.789 altname ens818f0np0 00:19:12.789 inet 192.168.100.8/24 scope global mlx_0_0 00:19:12.789 valid_lft forever preferred_lft forever 00:19:12.789 17:28:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:12.789 17:28:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:12.789 17:28:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:12.789 17:28:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:12.789 17:28:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:12.789 17:28:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:12.789 17:28:31 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:12.789 17:28:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:12.789 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:12.789 link/ether ec:0d:9a:8b:2d:dd brd 
ff:ff:ff:ff:ff:ff 00:19:12.789 altname enp217s0f1np1 00:19:12.789 altname ens818f1np1 00:19:12.789 inet 192.168.100.9/24 scope global mlx_0_1 00:19:12.789 valid_lft forever preferred_lft forever 00:19:12.789 17:28:31 -- nvmf/common.sh@410 -- # return 0 00:19:12.789 17:28:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:12.789 17:28:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:12.789 17:28:31 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:12.789 17:28:31 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:12.789 17:28:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:12.789 17:28:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:12.789 17:28:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:12.789 17:28:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:12.789 17:28:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:12.789 17:28:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:12.789 17:28:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.789 17:28:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:12.789 17:28:31 -- nvmf/common.sh@104 -- # continue 2 00:19:12.789 17:28:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:12.789 17:28:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.789 17:28:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:12.789 17:28:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.790 17:28:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:12.790 17:28:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:12.790 17:28:31 -- nvmf/common.sh@104 -- # continue 2 00:19:12.790 17:28:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:12.790 17:28:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:12.790 17:28:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:12.790 17:28:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:12.790 17:28:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:12.790 17:28:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:12.790 17:28:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:12.790 17:28:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:12.790 17:28:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:12.790 17:28:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:12.790 17:28:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:12.790 17:28:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:12.790 17:28:32 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:12.790 192.168.100.9' 00:19:12.790 17:28:32 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:12.790 192.168.100.9' 00:19:12.790 17:28:32 -- nvmf/common.sh@445 -- # head -n 1 00:19:12.790 17:28:32 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:12.790 17:28:32 -- nvmf/common.sh@446 -- # tail -n +2 00:19:12.790 17:28:32 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:12.790 192.168.100.9' 00:19:12.790 17:28:32 -- nvmf/common.sh@446 -- # head -n 1 00:19:12.790 17:28:32 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:12.790 17:28:32 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:12.790 17:28:32 -- nvmf/common.sh@451 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:12.790 17:28:32 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:12.790 17:28:32 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:12.790 17:28:32 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:12.790 17:28:32 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:12.790 17:28:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:12.790 17:28:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:12.790 17:28:32 -- common/autotest_common.sh@10 -- # set +x 00:19:12.790 17:28:32 -- nvmf/common.sh@469 -- # nvmfpid=2718086 00:19:12.790 17:28:32 -- nvmf/common.sh@470 -- # waitforlisten 2718086 00:19:12.790 17:28:32 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:12.790 17:28:32 -- common/autotest_common.sh@829 -- # '[' -z 2718086 ']' 00:19:12.790 17:28:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.790 17:28:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.790 17:28:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.790 17:28:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.790 17:28:32 -- common/autotest_common.sh@10 -- # set +x 00:19:12.790 [2024-11-09 17:28:32.104980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:12.790 [2024-11-09 17:28:32.105033] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.790 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.790 [2024-11-09 17:28:32.174858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.790 [2024-11-09 17:28:32.244527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:12.790 [2024-11-09 17:28:32.244638] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.790 [2024-11-09 17:28:32.244647] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.790 [2024-11-09 17:28:32.244656] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:12.790 [2024-11-09 17:28:32.244705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.790 [2024-11-09 17:28:32.244733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.790 [2024-11-09 17:28:32.244750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.790 [2024-11-09 17:28:32.244752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.358 17:28:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.358 17:28:32 -- common/autotest_common.sh@862 -- # return 0 00:19:13.358 17:28:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:13.358 17:28:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:13.358 17:28:32 -- common/autotest_common.sh@10 -- # set +x 00:19:13.358 17:28:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.358 17:28:32 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:13.358 17:28:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.358 17:28:32 -- common/autotest_common.sh@10 -- # set +x 00:19:13.358 [2024-11-09 17:28:33.003024] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1410090/0x1414580) succeed. 00:19:13.358 [2024-11-09 17:28:33.012317] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1411680/0x1455c20) succeed. 00:19:13.618 17:28:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.618 17:28:33 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:13.618 17:28:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.618 17:28:33 -- common/autotest_common.sh@10 -- # set +x 00:19:13.618 Malloc0 00:19:13.618 17:28:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.618 17:28:33 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:13.618 17:28:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.618 17:28:33 -- common/autotest_common.sh@10 -- # set +x 00:19:13.618 17:28:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.618 17:28:33 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:13.618 17:28:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.618 17:28:33 -- common/autotest_common.sh@10 -- # set +x 00:19:13.618 17:28:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.618 17:28:33 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:13.618 17:28:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.618 17:28:33 -- common/autotest_common.sh@10 -- # set +x 00:19:13.618 [2024-11-09 17:28:33.183269] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:13.618 17:28:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.618 17:28:33 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:13.618 test case1: single bdev can't be used in multiple subsystems 00:19:13.618 17:28:33 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:13.618 17:28:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.618 17:28:33 -- common/autotest_common.sh@10 -- # set +x 00:19:13.618 17:28:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
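The nmic test then builds its target entirely over JSON-RPC, as the rpc_cmd calls above show: create the rdma transport, a 64 MiB malloc bdev, a subsystem, a namespace backed by that bdev, and an rdma listener on 192.168.100.8:4420. Driven by hand with scripts/rpc.py against the same socket, the sequence would look roughly like this (arguments copied from the trace):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
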
00:19:13.618 17:28:33 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:13.618 17:28:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.618 17:28:33 -- common/autotest_common.sh@10 -- # set +x 00:19:13.618 17:28:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.618 17:28:33 -- target/nmic.sh@28 -- # nmic_status=0 00:19:13.618 17:28:33 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:13.618 17:28:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.618 17:28:33 -- common/autotest_common.sh@10 -- # set +x 00:19:13.618 [2024-11-09 17:28:33.207008] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:13.618 [2024-11-09 17:28:33.207028] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:13.618 [2024-11-09 17:28:33.207038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.618 request: 00:19:13.618 { 00:19:13.618 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:13.618 "namespace": { 00:19:13.618 "bdev_name": "Malloc0" 00:19:13.618 }, 00:19:13.618 "method": "nvmf_subsystem_add_ns", 00:19:13.618 "req_id": 1 00:19:13.618 } 00:19:13.618 Got JSON-RPC error response 00:19:13.618 response: 00:19:13.618 { 00:19:13.618 "code": -32602, 00:19:13.618 "message": "Invalid parameters" 00:19:13.618 } 00:19:13.618 17:28:33 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:13.618 17:28:33 -- target/nmic.sh@29 -- # nmic_status=1 00:19:13.618 17:28:33 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:13.618 17:28:33 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:13.618 Adding namespace failed - expected result. 
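Test case 1 deliberately provokes the failure seen above: Malloc0 is already claimed by cnode1, so adding it to cnode2 must return a JSON-RPC error, and the script only continues when that error was observed. Stripped of the harness plumbing, and reusing the $RPC helper from the sketch above, the check amounts to something like:

    nmic_status=0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
    if [ "$nmic_status" -eq 0 ]; then
        # the namespace was added even though the bdev is claimed elsewhere: that is a real failure
        echo "Namespace unexpectedly added to second subsystem"
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'
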
00:19:13.618 17:28:33 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:13.618 test case2: host connect to nvmf target in multiple paths 00:19:13.618 17:28:33 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:13.618 17:28:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.618 17:28:33 -- common/autotest_common.sh@10 -- # set +x 00:19:13.618 [2024-11-09 17:28:33.219078] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:13.618 17:28:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.618 17:28:33 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:14.555 17:28:34 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:19:15.491 17:28:35 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:15.491 17:28:35 -- common/autotest_common.sh@1187 -- # local i=0 00:19:15.491 17:28:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:15.491 17:28:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:15.491 17:28:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:18.025 17:28:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:18.025 17:28:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:18.025 17:28:37 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:18.025 17:28:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:18.025 17:28:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:18.026 17:28:37 -- common/autotest_common.sh@1197 -- # return 0 00:19:18.026 17:28:37 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:18.026 [global] 00:19:18.026 thread=1 00:19:18.026 invalidate=1 00:19:18.026 rw=write 00:19:18.026 time_based=1 00:19:18.026 runtime=1 00:19:18.026 ioengine=libaio 00:19:18.026 direct=1 00:19:18.026 bs=4096 00:19:18.026 iodepth=1 00:19:18.026 norandommap=0 00:19:18.026 numjobs=1 00:19:18.026 00:19:18.026 verify_dump=1 00:19:18.026 verify_backlog=512 00:19:18.026 verify_state_save=0 00:19:18.026 do_verify=1 00:19:18.026 verify=crc32c-intel 00:19:18.026 [job0] 00:19:18.026 filename=/dev/nvme0n1 00:19:18.026 Could not set queue depth (nvme0n1) 00:19:18.026 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:18.026 fio-3.35 00:19:18.026 Starting 1 thread 00:19:18.962 00:19:18.962 job0: (groupid=0, jobs=1): err= 0: pid=2719312: Sat Nov 9 17:28:38 2024 00:19:18.962 read: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec) 00:19:18.962 slat (nsec): min=8160, max=33546, avg=8719.32, stdev=953.53 00:19:18.962 clat (usec): min=43, max=106, avg=60.87, stdev= 4.40 00:19:18.962 lat (usec): min=57, max=115, avg=69.59, stdev= 4.46 00:19:18.962 clat percentiles (usec): 00:19:18.962 | 1.00th=[ 52], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 58], 00:19:18.962 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 62], 00:19:18.962 | 
70.00th=[ 63], 80.00th=[ 65], 90.00th=[ 67], 95.00th=[ 69], 00:19:18.962 | 99.00th=[ 73], 99.50th=[ 74], 99.90th=[ 80], 99.95th=[ 85], 00:19:18.962 | 99.99th=[ 108] 00:19:18.962 write: IOPS=7158, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:19:18.962 slat (nsec): min=8342, max=41533, avg=11277.06, stdev=1123.00 00:19:18.962 clat (nsec): min=29051, max=99501, avg=58718.70, stdev=4539.92 00:19:18.962 lat (usec): min=55, max=140, avg=70.00, stdev= 4.66 00:19:18.962 clat percentiles (nsec): 00:19:18.962 | 1.00th=[49408], 5.00th=[51456], 10.00th=[52992], 20.00th=[55040], 00:19:18.962 | 30.00th=[56064], 40.00th=[57600], 50.00th=[58624], 60.00th=[59648], 00:19:18.962 | 70.00th=[61184], 80.00th=[62720], 90.00th=[64256], 95.00th=[66048], 00:19:18.962 | 99.00th=[70144], 99.50th=[71168], 99.90th=[74240], 99.95th=[80384], 00:19:18.962 | 99.99th=[99840] 00:19:18.962 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:19:18.962 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:19:18.962 lat (usec) : 50=0.89%, 100=99.10%, 250=0.01% 00:19:18.962 cpu : usr=11.40%, sys=17.40%, ctx=13822, majf=0, minf=1 00:19:18.962 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:18.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.962 issued rwts: total=6656,7166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.962 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:18.962 00:19:18.962 Run status group 0 (all jobs): 00:19:18.962 READ: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:19:18.962 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:19:18.962 00:19:18.962 Disk stats (read/write): 00:19:18.962 nvme0n1: ios=6193/6221, merge=0/0, ticks=327/327, in_queue=654, util=90.58% 00:19:18.962 17:28:38 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:20.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:20.901 17:28:40 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:20.901 17:28:40 -- common/autotest_common.sh@1208 -- # local i=0 00:19:20.901 17:28:40 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:20.901 17:28:40 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:20.901 17:28:40 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:20.901 17:28:40 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:20.901 17:28:40 -- common/autotest_common.sh@1220 -- # return 0 00:19:20.901 17:28:40 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:20.901 17:28:40 -- target/nmic.sh@53 -- # nvmftestfini 00:19:20.901 17:28:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:20.901 17:28:40 -- nvmf/common.sh@116 -- # sync 00:19:20.901 17:28:40 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:20.901 17:28:40 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:20.901 17:28:40 -- nvmf/common.sh@119 -- # set +e 00:19:20.901 17:28:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:20.901 17:28:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:20.901 rmmod nvme_rdma 00:19:21.212 rmmod nvme_fabrics 00:19:21.212 17:28:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:21.212 17:28:40 -- nvmf/common.sh@123 -- # set 
-e 00:19:21.212 17:28:40 -- nvmf/common.sh@124 -- # return 0 00:19:21.212 17:28:40 -- nvmf/common.sh@477 -- # '[' -n 2718086 ']' 00:19:21.212 17:28:40 -- nvmf/common.sh@478 -- # killprocess 2718086 00:19:21.212 17:28:40 -- common/autotest_common.sh@936 -- # '[' -z 2718086 ']' 00:19:21.212 17:28:40 -- common/autotest_common.sh@940 -- # kill -0 2718086 00:19:21.212 17:28:40 -- common/autotest_common.sh@941 -- # uname 00:19:21.212 17:28:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:21.212 17:28:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2718086 00:19:21.212 17:28:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:21.212 17:28:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:21.212 17:28:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2718086' 00:19:21.212 killing process with pid 2718086 00:19:21.212 17:28:40 -- common/autotest_common.sh@955 -- # kill 2718086 00:19:21.212 17:28:40 -- common/autotest_common.sh@960 -- # wait 2718086 00:19:21.472 17:28:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:21.472 17:28:41 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:21.472 00:19:21.472 real 0m15.492s 00:19:21.472 user 0m44.769s 00:19:21.472 sys 0m5.778s 00:19:21.472 17:28:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:21.472 17:28:41 -- common/autotest_common.sh@10 -- # set +x 00:19:21.472 ************************************ 00:19:21.472 END TEST nvmf_nmic 00:19:21.472 ************************************ 00:19:21.472 17:28:41 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:21.472 17:28:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:21.472 17:28:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:21.472 17:28:41 -- common/autotest_common.sh@10 -- # set +x 00:19:21.472 ************************************ 00:19:21.472 START TEST nvmf_fio_target 00:19:21.472 ************************************ 00:19:21.472 17:28:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:21.472 * Looking for test storage... 
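Test case 2, whose connect, fio run and disconnect appear in the preceding output, exercises two paths to the same subsystem: the initiator connects to cnode1 through both listeners (ports 4420 and 4421 on 192.168.100.8), waits for the namespace to show up by its serial, and a single disconnect then tears down both controllers. Reduced to the host-side commands seen in the trace:

    nvme connect -i 15 "${NVME_HOST[@]}" -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    nvme connect -i 15 "${NVME_HOST[@]}" -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # poll until one namespace is visible
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # reports "disconnected 2 controller(s)"
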
00:19:21.472 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:21.472 17:28:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:21.472 17:28:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:21.472 17:28:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:21.732 17:28:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:21.732 17:28:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:21.732 17:28:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:21.732 17:28:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:21.732 17:28:41 -- scripts/common.sh@335 -- # IFS=.-: 00:19:21.732 17:28:41 -- scripts/common.sh@335 -- # read -ra ver1 00:19:21.732 17:28:41 -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.732 17:28:41 -- scripts/common.sh@336 -- # read -ra ver2 00:19:21.732 17:28:41 -- scripts/common.sh@337 -- # local 'op=<' 00:19:21.732 17:28:41 -- scripts/common.sh@339 -- # ver1_l=2 00:19:21.732 17:28:41 -- scripts/common.sh@340 -- # ver2_l=1 00:19:21.732 17:28:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:21.732 17:28:41 -- scripts/common.sh@343 -- # case "$op" in 00:19:21.732 17:28:41 -- scripts/common.sh@344 -- # : 1 00:19:21.732 17:28:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:21.732 17:28:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:21.732 17:28:41 -- scripts/common.sh@364 -- # decimal 1 00:19:21.732 17:28:41 -- scripts/common.sh@352 -- # local d=1 00:19:21.732 17:28:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.732 17:28:41 -- scripts/common.sh@354 -- # echo 1 00:19:21.732 17:28:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:21.732 17:28:41 -- scripts/common.sh@365 -- # decimal 2 00:19:21.732 17:28:41 -- scripts/common.sh@352 -- # local d=2 00:19:21.732 17:28:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.732 17:28:41 -- scripts/common.sh@354 -- # echo 2 00:19:21.732 17:28:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:21.732 17:28:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:21.732 17:28:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:21.732 17:28:41 -- scripts/common.sh@367 -- # return 0 00:19:21.732 17:28:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.732 17:28:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:21.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.732 --rc genhtml_branch_coverage=1 00:19:21.732 --rc genhtml_function_coverage=1 00:19:21.732 --rc genhtml_legend=1 00:19:21.732 --rc geninfo_all_blocks=1 00:19:21.732 --rc geninfo_unexecuted_blocks=1 00:19:21.732 00:19:21.732 ' 00:19:21.732 17:28:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:21.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.732 --rc genhtml_branch_coverage=1 00:19:21.732 --rc genhtml_function_coverage=1 00:19:21.732 --rc genhtml_legend=1 00:19:21.732 --rc geninfo_all_blocks=1 00:19:21.732 --rc geninfo_unexecuted_blocks=1 00:19:21.732 00:19:21.732 ' 00:19:21.732 17:28:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:21.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.732 --rc genhtml_branch_coverage=1 00:19:21.732 --rc genhtml_function_coverage=1 00:19:21.732 --rc genhtml_legend=1 00:19:21.732 --rc geninfo_all_blocks=1 00:19:21.732 --rc geninfo_unexecuted_blocks=1 00:19:21.732 00:19:21.732 ' 
00:19:21.732 17:28:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:21.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.732 --rc genhtml_branch_coverage=1 00:19:21.732 --rc genhtml_function_coverage=1 00:19:21.732 --rc genhtml_legend=1 00:19:21.732 --rc geninfo_all_blocks=1 00:19:21.732 --rc geninfo_unexecuted_blocks=1 00:19:21.732 00:19:21.732 ' 00:19:21.732 17:28:41 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:21.732 17:28:41 -- nvmf/common.sh@7 -- # uname -s 00:19:21.732 17:28:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.732 17:28:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.732 17:28:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.732 17:28:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.732 17:28:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.732 17:28:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.732 17:28:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.732 17:28:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.732 17:28:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.732 17:28:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.732 17:28:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:21.732 17:28:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:21.732 17:28:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.732 17:28:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.732 17:28:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.732 17:28:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:21.732 17:28:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.732 17:28:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.732 17:28:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.732 17:28:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.732 17:28:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.732 17:28:41 -- paths/export.sh@4 -- # 
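The fio_target run that starts here repeats the NIC discovery that opened this section: gather_supported_nvmf_pci_devs sorts PCI IDs into the e810, x722 and mlx arrays by vendor (0x8086 Intel, 0x15b3 Mellanox) and device ID, and because the mlx5 selection is taken at nvmf/common.sh@326, only the mlx entries (the two ports reporting device ID 0x1015 in this run) end up in pci_devs. A cut-down sketch of that classification, with pci_bus_cache and nic_type standing in for state the harness builds elsewhere:

    intel=0x8086 mellanox=0x15b3
    mlx=()
    # pci_bus_cache is assumed to be an associative array mapping "vendor:device" to PCI addresses,
    # e.g. pci_bus_cache["0x15b3:0x1015"]="0000:d9:00.0 0000:d9:00.1"
    mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
    pci_devs=("${e810[@]}" "${x722[@]}" "${mlx[@]}")
    [[ $nic_type == mlx5 ]] && pci_devs=("${mlx[@]}")   # nic_type: stand-in for the harness's NIC selection
    (( ${#pci_devs[@]} == 0 )) && { echo "no supported RDMA NICs found"; return 1; }
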
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.732 17:28:41 -- paths/export.sh@5 -- # export PATH 00:19:21.732 17:28:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.733 17:28:41 -- nvmf/common.sh@46 -- # : 0 00:19:21.733 17:28:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:21.733 17:28:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:21.733 17:28:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:21.733 17:28:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.733 17:28:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.733 17:28:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:21.733 17:28:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:21.733 17:28:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:21.733 17:28:41 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:21.733 17:28:41 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:21.733 17:28:41 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:21.733 17:28:41 -- target/fio.sh@16 -- # nvmftestinit 00:19:21.733 17:28:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:21.733 17:28:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.733 17:28:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:21.733 17:28:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:21.733 17:28:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:21.733 17:28:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.733 17:28:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.733 17:28:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.733 17:28:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:21.733 17:28:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:21.733 17:28:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:21.733 17:28:41 -- common/autotest_common.sh@10 -- # set +x 00:19:28.306 17:28:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:28.306 17:28:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:28.306 17:28:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:28.306 17:28:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:28.306 17:28:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:28.306 17:28:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:28.306 17:28:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:28.306 17:28:47 -- nvmf/common.sh@294 -- # net_devs=() 
00:19:28.306 17:28:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:28.306 17:28:47 -- nvmf/common.sh@295 -- # e810=() 00:19:28.306 17:28:47 -- nvmf/common.sh@295 -- # local -ga e810 00:19:28.306 17:28:47 -- nvmf/common.sh@296 -- # x722=() 00:19:28.306 17:28:47 -- nvmf/common.sh@296 -- # local -ga x722 00:19:28.306 17:28:47 -- nvmf/common.sh@297 -- # mlx=() 00:19:28.306 17:28:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:28.306 17:28:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.306 17:28:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.306 17:28:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.306 17:28:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.306 17:28:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.306 17:28:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.306 17:28:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.306 17:28:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.306 17:28:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.306 17:28:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.306 17:28:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.306 17:28:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:28.306 17:28:47 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:28.306 17:28:47 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:28.306 17:28:47 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:28.306 17:28:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:28.306 17:28:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:28.306 17:28:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:28.306 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:28.306 17:28:47 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.306 17:28:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:28.306 17:28:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:28.306 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:28.306 17:28:47 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.306 17:28:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:28.306 17:28:47 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:28.306 17:28:47 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.306 17:28:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:28.306 17:28:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.306 17:28:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:28.306 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:28.306 17:28:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.306 17:28:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:28.306 17:28:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.306 17:28:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:28.306 17:28:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.306 17:28:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:28.306 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:28.306 17:28:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.306 17:28:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:28.306 17:28:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:28.306 17:28:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:28.306 17:28:47 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:28.306 17:28:47 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:28.306 17:28:47 -- nvmf/common.sh@57 -- # uname 00:19:28.306 17:28:47 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:28.306 17:28:47 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:28.306 17:28:47 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:28.306 17:28:47 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:28.306 17:28:47 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:28.306 17:28:47 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:28.306 17:28:47 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:28.306 17:28:47 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:28.306 17:28:47 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:28.306 17:28:47 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:28.306 17:28:47 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:28.306 17:28:47 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.306 17:28:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:28.306 17:28:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:28.306 17:28:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.306 17:28:47 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:28.306 17:28:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:28.306 17:28:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.307 17:28:47 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:28.307 17:28:47 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:28.307 17:28:47 -- nvmf/common.sh@104 -- # continue 2 00:19:28.307 17:28:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:28.307 17:28:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.307 17:28:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:28.307 17:28:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.307 17:28:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:28.307 17:28:47 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:28.307 17:28:47 -- 
nvmf/common.sh@104 -- # continue 2 00:19:28.307 17:28:47 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:28.307 17:28:47 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:28.307 17:28:47 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:28.307 17:28:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:28.307 17:28:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:28.307 17:28:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:28.307 17:28:47 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:28.307 17:28:47 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:28.307 17:28:47 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:28.307 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:28.307 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:28.307 altname enp217s0f0np0 00:19:28.307 altname ens818f0np0 00:19:28.307 inet 192.168.100.8/24 scope global mlx_0_0 00:19:28.307 valid_lft forever preferred_lft forever 00:19:28.307 17:28:47 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:28.307 17:28:47 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:28.307 17:28:47 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:28.307 17:28:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:28.307 17:28:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:28.307 17:28:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:28.307 17:28:47 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:28.307 17:28:47 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:28.307 17:28:47 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:28.307 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:28.307 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:28.307 altname enp217s0f1np1 00:19:28.307 altname ens818f1np1 00:19:28.307 inet 192.168.100.9/24 scope global mlx_0_1 00:19:28.307 valid_lft forever preferred_lft forever 00:19:28.307 17:28:47 -- nvmf/common.sh@410 -- # return 0 00:19:28.307 17:28:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:28.307 17:28:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:28.307 17:28:47 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:28.307 17:28:47 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:28.307 17:28:47 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:28.307 17:28:47 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.307 17:28:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:28.307 17:28:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:28.307 17:28:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.307 17:28:47 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:28.307 17:28:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:28.307 17:28:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.307 17:28:47 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:28.307 17:28:47 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:28.307 17:28:47 -- nvmf/common.sh@104 -- # continue 2 00:19:28.307 17:28:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:28.307 17:28:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.307 17:28:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:28.307 17:28:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.307 17:28:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:19:28.307 17:28:47 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:28.307 17:28:47 -- nvmf/common.sh@104 -- # continue 2 00:19:28.307 17:28:47 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:28.307 17:28:47 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:28.307 17:28:47 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:28.307 17:28:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:28.307 17:28:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:28.307 17:28:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:28.307 17:28:47 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:28.307 17:28:47 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:28.307 17:28:47 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:28.307 17:28:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:28.307 17:28:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:28.307 17:28:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:28.307 17:28:47 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:28.307 192.168.100.9' 00:19:28.307 17:28:47 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:28.307 192.168.100.9' 00:19:28.307 17:28:47 -- nvmf/common.sh@445 -- # head -n 1 00:19:28.307 17:28:47 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:28.307 17:28:47 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:28.307 192.168.100.9' 00:19:28.307 17:28:47 -- nvmf/common.sh@446 -- # tail -n +2 00:19:28.307 17:28:47 -- nvmf/common.sh@446 -- # head -n 1 00:19:28.307 17:28:47 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:28.307 17:28:47 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:28.307 17:28:47 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:28.307 17:28:47 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:28.307 17:28:47 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:28.307 17:28:47 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:28.307 17:28:47 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:28.307 17:28:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:28.307 17:28:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:28.307 17:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:28.307 17:28:47 -- nvmf/common.sh@469 -- # nvmfpid=2723061 00:19:28.307 17:28:47 -- nvmf/common.sh@470 -- # waitforlisten 2723061 00:19:28.307 17:28:47 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:28.307 17:28:47 -- common/autotest_common.sh@829 -- # '[' -z 2723061 ']' 00:19:28.307 17:28:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.307 17:28:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.307 17:28:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.307 17:28:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.307 17:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:28.307 [2024-11-09 17:28:48.014365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:28.307 [2024-11-09 17:28:48.014423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.307 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.567 [2024-11-09 17:28:48.084849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:28.567 [2024-11-09 17:28:48.152897] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:28.567 [2024-11-09 17:28:48.153028] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.567 [2024-11-09 17:28:48.153038] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.567 [2024-11-09 17:28:48.153047] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.567 [2024-11-09 17:28:48.153097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.567 [2024-11-09 17:28:48.153193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.567 [2024-11-09 17:28:48.153281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:28.567 [2024-11-09 17:28:48.153283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.134 17:28:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.134 17:28:48 -- common/autotest_common.sh@862 -- # return 0 00:19:29.134 17:28:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:29.134 17:28:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:29.134 17:28:48 -- common/autotest_common.sh@10 -- # set +x 00:19:29.134 17:28:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.134 17:28:48 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:29.393 [2024-11-09 17:28:49.060812] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c19090/0x1c1d580) succeed. 00:19:29.393 [2024-11-09 17:28:49.069934] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c1a680/0x1c5ec20) succeed. 
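At this point the RDMA transport and IB devices exist, and the trace goes on to build the test target: malloc bdevs, a striped raid0 and a concat0 volume on top of them, one subsystem with those bdevs as namespaces, a listener on the first RDMA IP, and finally a kernel-initiator connect. Condensed into plain commands (paths shortened, host NQN/ID omitted), the sequence exercised by target/fio.sh below is roughly the following; this is a simplified sketch, not the literal script:

# Simplified sketch of the rpc.py sequence traced below; not the literal fio.sh code.
rpc=./scripts/rpc.py        # full path in the log: .../spdk/scripts/rpc.py

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512                      # Malloc0: 64 MB bdev, 512-byte blocks
$rpc bdev_malloc_create 64 512                      # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'                # striped volume
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# Initiator side (the log additionally passes --hostnqn/--hostid and -i 15):
nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420

After the connect, the script polls lsblk for the SPDKISFASTANDAWESOME serial until all four expected namespaces appear as block devices, and only then launches the fio workloads against /dev/nvme0n1 through /dev/nvme0n4, which is what produces the job output that follows.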
00:19:29.653 17:28:49 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:29.653 17:28:49 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:29.653 17:28:49 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:29.912 17:28:49 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:29.912 17:28:49 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:30.170 17:28:49 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:30.170 17:28:49 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:30.429 17:28:50 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:30.429 17:28:50 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:30.688 17:28:50 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:30.688 17:28:50 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:30.688 17:28:50 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:30.947 17:28:50 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:30.947 17:28:50 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:31.206 17:28:50 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:31.206 17:28:50 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:31.464 17:28:51 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:31.464 17:28:51 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:31.464 17:28:51 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:31.723 17:28:51 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:31.723 17:28:51 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:31.982 17:28:51 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:31.982 [2024-11-09 17:28:51.739838] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:32.241 17:28:51 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:32.241 17:28:51 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:32.500 17:28:52 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:33.436 17:28:53 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:33.436 17:28:53 -- common/autotest_common.sh@1187 -- # local 
i=0 00:19:33.436 17:28:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:33.436 17:28:53 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:19:33.436 17:28:53 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:19:33.436 17:28:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:35.341 17:28:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:35.341 17:28:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:35.341 17:28:55 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:35.600 17:28:55 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:19:35.600 17:28:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:35.600 17:28:55 -- common/autotest_common.sh@1197 -- # return 0 00:19:35.600 17:28:55 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:35.600 [global] 00:19:35.600 thread=1 00:19:35.600 invalidate=1 00:19:35.600 rw=write 00:19:35.600 time_based=1 00:19:35.600 runtime=1 00:19:35.600 ioengine=libaio 00:19:35.600 direct=1 00:19:35.600 bs=4096 00:19:35.600 iodepth=1 00:19:35.600 norandommap=0 00:19:35.600 numjobs=1 00:19:35.600 00:19:35.600 verify_dump=1 00:19:35.600 verify_backlog=512 00:19:35.600 verify_state_save=0 00:19:35.600 do_verify=1 00:19:35.600 verify=crc32c-intel 00:19:35.600 [job0] 00:19:35.600 filename=/dev/nvme0n1 00:19:35.600 [job1] 00:19:35.600 filename=/dev/nvme0n2 00:19:35.600 [job2] 00:19:35.600 filename=/dev/nvme0n3 00:19:35.600 [job3] 00:19:35.600 filename=/dev/nvme0n4 00:19:35.600 Could not set queue depth (nvme0n1) 00:19:35.600 Could not set queue depth (nvme0n2) 00:19:35.600 Could not set queue depth (nvme0n3) 00:19:35.600 Could not set queue depth (nvme0n4) 00:19:35.859 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:35.859 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:35.859 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:35.859 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:35.859 fio-3.35 00:19:35.859 Starting 4 threads 00:19:37.236 00:19:37.236 job0: (groupid=0, jobs=1): err= 0: pid=2724610: Sat Nov 9 17:28:56 2024 00:19:37.236 read: IOPS=3847, BW=15.0MiB/s (15.8MB/s)(15.0MiB/1001msec) 00:19:37.236 slat (nsec): min=8119, max=37251, avg=9431.19, stdev=2413.33 00:19:37.236 clat (usec): min=64, max=195, avg=117.59, stdev=20.38 00:19:37.236 lat (usec): min=72, max=205, avg=127.02, stdev=20.68 00:19:37.236 clat percentiles (usec): 00:19:37.236 | 1.00th=[ 71], 5.00th=[ 75], 10.00th=[ 80], 20.00th=[ 104], 00:19:37.236 | 30.00th=[ 111], 40.00th=[ 118], 50.00th=[ 126], 60.00th=[ 129], 00:19:37.236 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 137], 95.00th=[ 141], 00:19:37.236 | 99.00th=[ 147], 99.50th=[ 157], 99.90th=[ 176], 99.95th=[ 178], 00:19:37.236 | 99.99th=[ 196] 00:19:37.236 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:19:37.236 slat (nsec): min=8611, max=38065, avg=11456.86, stdev=1360.62 00:19:37.236 clat (usec): min=50, max=198, avg=108.26, stdev=23.75 00:19:37.236 lat (usec): min=61, max=210, avg=119.72, stdev=23.90 00:19:37.236 clat percentiles (usec): 00:19:37.236 | 1.00th=[ 66], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 78], 00:19:37.236 | 
30.00th=[ 97], 40.00th=[ 104], 50.00th=[ 118], 60.00th=[ 125], 00:19:37.236 | 70.00th=[ 128], 80.00th=[ 130], 90.00th=[ 133], 95.00th=[ 135], 00:19:37.236 | 99.00th=[ 143], 99.50th=[ 147], 99.90th=[ 165], 99.95th=[ 172], 00:19:37.236 | 99.99th=[ 200] 00:19:37.236 bw ( KiB/s): min=16384, max=16384, per=27.44%, avg=16384.00, stdev= 0.00, samples=1 00:19:37.236 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:37.236 lat (usec) : 100=25.75%, 250=74.25% 00:19:37.236 cpu : usr=5.70%, sys=10.80%, ctx=7947, majf=0, minf=1 00:19:37.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:37.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.236 issued rwts: total=3851,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.236 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:37.236 job1: (groupid=0, jobs=1): err= 0: pid=2724611: Sat Nov 9 17:28:56 2024 00:19:37.236 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:19:37.236 slat (nsec): min=8068, max=45219, avg=9580.66, stdev=1505.00 00:19:37.236 clat (usec): min=66, max=383, avg=125.94, stdev=15.45 00:19:37.236 lat (usec): min=75, max=392, avg=135.52, stdev=15.78 00:19:37.236 clat percentiles (usec): 00:19:37.236 | 1.00th=[ 92], 5.00th=[ 101], 10.00th=[ 106], 20.00th=[ 113], 00:19:37.236 | 30.00th=[ 121], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 131], 00:19:37.236 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 147], 00:19:37.236 | 99.00th=[ 167], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 200], 00:19:37.236 | 99.99th=[ 383] 00:19:37.236 write: IOPS=3672, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1001msec); 0 zone resets 00:19:37.236 slat (nsec): min=10234, max=39696, avg=12145.05, stdev=1679.74 00:19:37.236 clat (usec): min=65, max=205, avg=122.39, stdev=17.54 00:19:37.236 lat (usec): min=77, max=230, avg=134.53, stdev=17.96 00:19:37.236 clat percentiles (usec): 00:19:37.236 | 1.00th=[ 88], 5.00th=[ 95], 10.00th=[ 99], 20.00th=[ 104], 00:19:37.236 | 30.00th=[ 118], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 128], 00:19:37.236 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 149], 00:19:37.236 | 99.00th=[ 176], 99.50th=[ 186], 99.90th=[ 202], 99.95th=[ 204], 00:19:37.236 | 99.99th=[ 206] 00:19:37.236 bw ( KiB/s): min=16384, max=16384, per=27.44%, avg=16384.00, stdev= 0.00, samples=1 00:19:37.236 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:37.236 lat (usec) : 100=8.95%, 250=91.03%, 500=0.01% 00:19:37.236 cpu : usr=6.60%, sys=10.30%, ctx=7260, majf=0, minf=1 00:19:37.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:37.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.236 issued rwts: total=3584,3676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.236 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:37.236 job2: (groupid=0, jobs=1): err= 0: pid=2724612: Sat Nov 9 17:28:56 2024 00:19:37.236 read: IOPS=3474, BW=13.6MiB/s (14.2MB/s)(13.6MiB/1001msec) 00:19:37.236 slat (nsec): min=8365, max=36769, avg=10269.17, stdev=2888.03 00:19:37.236 clat (usec): min=72, max=319, avg=131.78, stdev=14.80 00:19:37.236 lat (usec): min=80, max=329, avg=142.05, stdev=15.44 00:19:37.236 clat percentiles (usec): 00:19:37.236 | 1.00th=[ 82], 5.00th=[ 114], 10.00th=[ 120], 20.00th=[ 125], 
00:19:37.236 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:19:37.236 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 155], 00:19:37.236 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 198], 99.95th=[ 221], 00:19:37.236 | 99.99th=[ 322] 00:19:37.236 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:19:37.236 slat (nsec): min=10517, max=39456, avg=11830.42, stdev=1413.94 00:19:37.236 clat (usec): min=67, max=187, avg=124.13, stdev= 9.61 00:19:37.236 lat (usec): min=78, max=207, avg=135.96, stdev= 9.77 00:19:37.236 clat percentiles (usec): 00:19:37.236 | 1.00th=[ 100], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 118], 00:19:37.236 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 128], 00:19:37.236 | 70.00th=[ 129], 80.00th=[ 131], 90.00th=[ 135], 95.00th=[ 137], 00:19:37.236 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 174], 99.95th=[ 176], 00:19:37.236 | 99.99th=[ 188] 00:19:37.236 bw ( KiB/s): min=16384, max=16384, per=27.44%, avg=16384.00, stdev= 0.00, samples=1 00:19:37.236 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:37.236 lat (usec) : 100=2.19%, 250=97.79%, 500=0.01% 00:19:37.236 cpu : usr=5.50%, sys=9.40%, ctx=7063, majf=0, minf=1 00:19:37.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:37.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.236 issued rwts: total=3478,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.236 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:37.236 job3: (groupid=0, jobs=1): err= 0: pid=2724613: Sat Nov 9 17:28:56 2024 00:19:37.236 read: IOPS=3465, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1001msec) 00:19:37.236 slat (nsec): min=8344, max=33470, avg=9561.07, stdev=1599.92 00:19:37.236 clat (usec): min=74, max=359, avg=132.83, stdev=13.78 00:19:37.236 lat (usec): min=83, max=368, avg=142.39, stdev=13.75 00:19:37.236 clat percentiles (usec): 00:19:37.236 | 1.00th=[ 88], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 125], 00:19:37.236 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 135], 00:19:37.236 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 155], 00:19:37.236 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 188], 99.95th=[ 190], 00:19:37.236 | 99.99th=[ 359] 00:19:37.236 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:19:37.236 slat (nsec): min=10293, max=41737, avg=12439.02, stdev=3083.07 00:19:37.236 clat (usec): min=76, max=172, avg=123.86, stdev= 9.37 00:19:37.236 lat (usec): min=88, max=185, avg=136.30, stdev= 9.29 00:19:37.236 clat percentiles (usec): 00:19:37.236 | 1.00th=[ 101], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 117], 00:19:37.236 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 128], 00:19:37.236 | 70.00th=[ 129], 80.00th=[ 131], 90.00th=[ 135], 95.00th=[ 137], 00:19:37.236 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 163], 99.95th=[ 165], 00:19:37.236 | 99.99th=[ 174] 00:19:37.236 bw ( KiB/s): min=16384, max=16384, per=27.44%, avg=16384.00, stdev= 0.00, samples=1 00:19:37.236 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:37.236 lat (usec) : 100=1.55%, 250=98.44%, 500=0.01% 00:19:37.236 cpu : usr=5.30%, sys=10.20%, ctx=7053, majf=0, minf=1 00:19:37.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:37.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:37.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.236 issued rwts: total=3469,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.236 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:37.236 00:19:37.236 Run status group 0 (all jobs): 00:19:37.236 READ: bw=56.1MiB/s (58.8MB/s), 13.5MiB/s-15.0MiB/s (14.2MB/s-15.8MB/s), io=56.2MiB (58.9MB), run=1001-1001msec 00:19:37.236 WRITE: bw=58.3MiB/s (61.1MB/s), 14.0MiB/s-16.0MiB/s (14.7MB/s-16.8MB/s), io=58.4MiB (61.2MB), run=1001-1001msec 00:19:37.236 00:19:37.236 Disk stats (read/write): 00:19:37.236 nvme0n1: ios=3268/3584, merge=0/0, ticks=339/337, in_queue=676, util=84.47% 00:19:37.236 nvme0n2: ios=3047/3072, merge=0/0, ticks=370/368, in_queue=738, util=85.50% 00:19:37.236 nvme0n3: ios=2850/3072, merge=0/0, ticks=351/360, in_queue=711, util=88.48% 00:19:37.236 nvme0n4: ios=2838/3072, merge=0/0, ticks=351/356, in_queue=707, util=89.52% 00:19:37.236 17:28:56 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:37.236 [global] 00:19:37.236 thread=1 00:19:37.236 invalidate=1 00:19:37.236 rw=randwrite 00:19:37.236 time_based=1 00:19:37.236 runtime=1 00:19:37.236 ioengine=libaio 00:19:37.236 direct=1 00:19:37.236 bs=4096 00:19:37.236 iodepth=1 00:19:37.236 norandommap=0 00:19:37.236 numjobs=1 00:19:37.236 00:19:37.236 verify_dump=1 00:19:37.236 verify_backlog=512 00:19:37.236 verify_state_save=0 00:19:37.236 do_verify=1 00:19:37.236 verify=crc32c-intel 00:19:37.236 [job0] 00:19:37.236 filename=/dev/nvme0n1 00:19:37.236 [job1] 00:19:37.236 filename=/dev/nvme0n2 00:19:37.236 [job2] 00:19:37.236 filename=/dev/nvme0n3 00:19:37.236 [job3] 00:19:37.236 filename=/dev/nvme0n4 00:19:37.236 Could not set queue depth (nvme0n1) 00:19:37.236 Could not set queue depth (nvme0n2) 00:19:37.236 Could not set queue depth (nvme0n3) 00:19:37.236 Could not set queue depth (nvme0n4) 00:19:37.495 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:37.495 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:37.495 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:37.495 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:37.495 fio-3.35 00:19:37.495 Starting 4 threads 00:19:38.873 00:19:38.873 job0: (groupid=0, jobs=1): err= 0: pid=2725044: Sat Nov 9 17:28:58 2024 00:19:38.873 read: IOPS=3895, BW=15.2MiB/s (16.0MB/s)(15.2MiB/1001msec) 00:19:38.873 slat (nsec): min=8356, max=36601, avg=9471.68, stdev=1337.37 00:19:38.873 clat (usec): min=66, max=186, avg=114.26, stdev=17.42 00:19:38.873 lat (usec): min=76, max=195, avg=123.73, stdev=17.35 00:19:38.873 clat percentiles (usec): 00:19:38.873 | 1.00th=[ 73], 5.00th=[ 78], 10.00th=[ 84], 20.00th=[ 105], 00:19:38.873 | 30.00th=[ 111], 40.00th=[ 115], 50.00th=[ 118], 60.00th=[ 121], 00:19:38.873 | 70.00th=[ 124], 80.00th=[ 127], 90.00th=[ 131], 95.00th=[ 137], 00:19:38.873 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 176], 99.95th=[ 178], 00:19:38.873 | 99.99th=[ 188] 00:19:38.873 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:19:38.873 slat (nsec): min=10274, max=42328, avg=11570.48, stdev=1570.18 00:19:38.873 clat (usec): min=51, max=188, avg=109.75, stdev=16.81 00:19:38.873 lat (usec): min=75, max=199, avg=121.32, 
stdev=16.62 00:19:38.873 clat percentiles (usec): 00:19:38.873 | 1.00th=[ 70], 5.00th=[ 75], 10.00th=[ 81], 20.00th=[ 100], 00:19:38.873 | 30.00th=[ 106], 40.00th=[ 110], 50.00th=[ 114], 60.00th=[ 116], 00:19:38.873 | 70.00th=[ 119], 80.00th=[ 122], 90.00th=[ 127], 95.00th=[ 131], 00:19:38.873 | 99.00th=[ 151], 99.50th=[ 161], 99.90th=[ 169], 99.95th=[ 174], 00:19:38.873 | 99.99th=[ 190] 00:19:38.873 bw ( KiB/s): min=17176, max=17176, per=23.32%, avg=17176.00, stdev= 0.00, samples=1 00:19:38.873 iops : min= 4294, max= 4294, avg=4294.00, stdev= 0.00, samples=1 00:19:38.873 lat (usec) : 100=18.26%, 250=81.74% 00:19:38.873 cpu : usr=6.50%, sys=11.70%, ctx=7995, majf=0, minf=1 00:19:38.873 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:38.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.873 issued rwts: total=3899,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.873 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:38.873 job1: (groupid=0, jobs=1): err= 0: pid=2725045: Sat Nov 9 17:28:58 2024 00:19:38.873 read: IOPS=4984, BW=19.5MiB/s (20.4MB/s)(19.5MiB/1001msec) 00:19:38.873 slat (nsec): min=8080, max=35764, avg=9862.11, stdev=2539.33 00:19:38.873 clat (usec): min=59, max=693, avg=87.53, stdev=17.96 00:19:38.873 lat (usec): min=74, max=703, avg=97.39, stdev=18.25 00:19:38.873 clat percentiles (usec): 00:19:38.873 | 1.00th=[ 70], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 77], 00:19:38.873 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 85], 00:19:38.873 | 70.00th=[ 88], 80.00th=[ 94], 90.00th=[ 116], 95.00th=[ 123], 00:19:38.873 | 99.00th=[ 137], 99.50th=[ 143], 99.90th=[ 163], 99.95th=[ 174], 00:19:38.873 | 99.99th=[ 693] 00:19:38.873 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:19:38.873 slat (nsec): min=8312, max=52047, avg=12545.61, stdev=3530.74 00:19:38.873 clat (usec): min=56, max=168, avg=82.34, stdev=13.25 00:19:38.873 lat (usec): min=69, max=185, avg=94.88, stdev=13.76 00:19:38.873 clat percentiles (usec): 00:19:38.873 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 73], 00:19:38.873 | 30.00th=[ 75], 40.00th=[ 77], 50.00th=[ 79], 60.00th=[ 82], 00:19:38.873 | 70.00th=[ 85], 80.00th=[ 90], 90.00th=[ 104], 95.00th=[ 113], 00:19:38.873 | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 143], 99.95th=[ 151], 00:19:38.873 | 99.99th=[ 169] 00:19:38.873 bw ( KiB/s): min=20480, max=20480, per=27.81%, avg=20480.00, stdev= 0.00, samples=1 00:19:38.873 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:38.873 lat (usec) : 100=86.06%, 250=13.92%, 500=0.01%, 750=0.01% 00:19:38.873 cpu : usr=7.30%, sys=14.40%, ctx=10109, majf=0, minf=1 00:19:38.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:38.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.874 issued rwts: total=4989,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:38.874 job2: (groupid=0, jobs=1): err= 0: pid=2725046: Sat Nov 9 17:28:58 2024 00:19:38.874 read: IOPS=3623, BW=14.2MiB/s (14.8MB/s)(14.2MiB/1001msec) 00:19:38.874 slat (nsec): min=8522, max=24708, avg=9427.97, stdev=1162.64 00:19:38.874 clat (usec): min=77, max=175, avg=119.52, stdev=10.74 00:19:38.874 lat (usec): min=86, max=184, 
avg=128.95, stdev=10.72 00:19:38.874 clat percentiles (usec): 00:19:38.874 | 1.00th=[ 95], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 112], 00:19:38.874 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 123], 00:19:38.874 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 137], 00:19:38.874 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 176], 00:19:38.874 | 99.99th=[ 176] 00:19:38.874 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:19:38.874 slat (nsec): min=10470, max=38651, avg=11709.72, stdev=1185.53 00:19:38.874 clat (usec): min=73, max=168, avg=113.03, stdev=12.32 00:19:38.874 lat (usec): min=84, max=180, avg=124.73, stdev=12.40 00:19:38.874 clat percentiles (usec): 00:19:38.874 | 1.00th=[ 80], 5.00th=[ 89], 10.00th=[ 98], 20.00th=[ 105], 00:19:38.874 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 115], 60.00th=[ 117], 00:19:38.874 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 127], 95.00th=[ 131], 00:19:38.874 | 99.00th=[ 145], 99.50th=[ 151], 99.90th=[ 163], 99.95th=[ 165], 00:19:38.874 | 99.99th=[ 169] 00:19:38.874 bw ( KiB/s): min=16384, max=16384, per=22.24%, avg=16384.00, stdev= 0.00, samples=1 00:19:38.874 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:38.874 lat (usec) : 100=7.86%, 250=92.14% 00:19:38.874 cpu : usr=7.50%, sys=9.80%, ctx=7723, majf=0, minf=1 00:19:38.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:38.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.874 issued rwts: total=3627,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:38.874 job3: (groupid=0, jobs=1): err= 0: pid=2725047: Sat Nov 9 17:28:58 2024 00:19:38.874 read: IOPS=5062, BW=19.8MiB/s (20.7MB/s)(19.8MiB/1001msec) 00:19:38.874 slat (nsec): min=8151, max=32388, avg=8895.03, stdev=1139.81 00:19:38.874 clat (usec): min=67, max=204, avg=86.90, stdev= 8.13 00:19:38.874 lat (usec): min=79, max=213, avg=95.79, stdev= 8.37 00:19:38.874 clat percentiles (usec): 00:19:38.874 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 81], 00:19:38.874 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 88], 00:19:38.874 | 70.00th=[ 90], 80.00th=[ 92], 90.00th=[ 97], 95.00th=[ 101], 00:19:38.874 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 128], 99.95th=[ 131], 00:19:38.874 | 99.99th=[ 204] 00:19:38.874 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:19:38.874 slat (nsec): min=10281, max=72389, avg=11375.58, stdev=1933.17 00:19:38.874 clat (usec): min=64, max=244, avg=84.03, stdev=10.49 00:19:38.874 lat (usec): min=75, max=257, avg=95.41, stdev=11.03 00:19:38.874 clat percentiles (usec): 00:19:38.874 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 77], 00:19:38.874 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 84], 00:19:38.874 | 70.00th=[ 86], 80.00th=[ 89], 90.00th=[ 96], 95.00th=[ 110], 00:19:38.874 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 137], 99.95th=[ 155], 00:19:38.874 | 99.99th=[ 245] 00:19:38.874 bw ( KiB/s): min=20480, max=20480, per=27.81%, avg=20480.00, stdev= 0.00, samples=1 00:19:38.874 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:38.874 lat (usec) : 100=93.07%, 250=6.93% 00:19:38.874 cpu : usr=7.70%, sys=13.60%, ctx=10189, majf=0, minf=1 00:19:38.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:19:38.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.874 issued rwts: total=5068,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:38.874 00:19:38.874 Run status group 0 (all jobs): 00:19:38.874 READ: bw=68.6MiB/s (71.9MB/s), 14.2MiB/s-19.8MiB/s (14.8MB/s-20.7MB/s), io=68.7MiB (72.0MB), run=1001-1001msec 00:19:38.874 WRITE: bw=71.9MiB/s (75.4MB/s), 16.0MiB/s-20.0MiB/s (16.8MB/s-20.9MB/s), io=72.0MiB (75.5MB), run=1001-1001msec 00:19:38.874 00:19:38.874 Disk stats (read/write): 00:19:38.874 nvme0n1: ios=3121/3453, merge=0/0, ticks=337/360, in_queue=697, util=81.46% 00:19:38.874 nvme0n2: ios=3845/4096, merge=0/0, ticks=303/303, in_queue=606, util=82.82% 00:19:38.874 nvme0n3: ios=3072/3182, merge=0/0, ticks=347/335, in_queue=682, util=87.50% 00:19:38.874 nvme0n4: ios=3984/4096, merge=0/0, ticks=315/322, in_queue=637, util=89.15% 00:19:38.874 17:28:58 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:38.874 [global] 00:19:38.874 thread=1 00:19:38.874 invalidate=1 00:19:38.874 rw=write 00:19:38.874 time_based=1 00:19:38.874 runtime=1 00:19:38.874 ioengine=libaio 00:19:38.874 direct=1 00:19:38.874 bs=4096 00:19:38.874 iodepth=128 00:19:38.874 norandommap=0 00:19:38.874 numjobs=1 00:19:38.874 00:19:38.874 verify_dump=1 00:19:38.874 verify_backlog=512 00:19:38.874 verify_state_save=0 00:19:38.874 do_verify=1 00:19:38.874 verify=crc32c-intel 00:19:38.874 [job0] 00:19:38.874 filename=/dev/nvme0n1 00:19:38.874 [job1] 00:19:38.874 filename=/dev/nvme0n2 00:19:38.874 [job2] 00:19:38.874 filename=/dev/nvme0n3 00:19:38.874 [job3] 00:19:38.874 filename=/dev/nvme0n4 00:19:38.874 Could not set queue depth (nvme0n1) 00:19:38.874 Could not set queue depth (nvme0n2) 00:19:38.874 Could not set queue depth (nvme0n3) 00:19:38.874 Could not set queue depth (nvme0n4) 00:19:39.133 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:39.133 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:39.133 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:39.133 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:39.133 fio-3.35 00:19:39.133 Starting 4 threads 00:19:40.520 00:19:40.520 job0: (groupid=0, jobs=1): err= 0: pid=2725468: Sat Nov 9 17:29:00 2024 00:19:40.520 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:19:40.520 slat (nsec): min=1975, max=2548.0k, avg=138567.42, stdev=380314.75 00:19:40.520 clat (usec): min=10887, max=19782, avg=17789.04, stdev=1108.75 00:19:40.521 lat (usec): min=11624, max=20102, avg=17927.60, stdev=1050.91 00:19:40.521 clat percentiles (usec): 00:19:40.521 | 1.00th=[13173], 5.00th=[15401], 10.00th=[16909], 20.00th=[17171], 00:19:40.521 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:19:40.521 | 70.00th=[18482], 80.00th=[18482], 90.00th=[18744], 95.00th=[18744], 00:19:40.521 | 99.00th=[19792], 99.50th=[19792], 99.90th=[19792], 99.95th=[19792], 00:19:40.521 | 99.99th=[19792] 00:19:40.521 write: IOPS=3845, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1004msec); 0 zone resets 00:19:40.521 slat (usec): min=2, max=2399, avg=126.09, stdev=333.43 00:19:40.521 
clat (usec): min=1898, max=18805, avg=16289.92, stdev=2045.88 00:19:40.521 lat (usec): min=2764, max=18808, avg=16416.01, stdev=2031.19 00:19:40.521 clat percentiles (usec): 00:19:40.521 | 1.00th=[ 5604], 5.00th=[11731], 10.00th=[14484], 20.00th=[15926], 00:19:40.521 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16909], 60.00th=[17171], 00:19:40.521 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17695], 95.00th=[17695], 00:19:40.521 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:19:40.521 | 99.99th=[18744] 00:19:40.521 bw ( KiB/s): min=13952, max=15920, per=16.16%, avg=14936.00, stdev=1391.59, samples=2 00:19:40.521 iops : min= 3488, max= 3980, avg=3734.00, stdev=347.90, samples=2 00:19:40.521 lat (msec) : 2=0.01%, 4=0.20%, 10=0.86%, 20=98.93% 00:19:40.521 cpu : usr=1.99%, sys=4.49%, ctx=2340, majf=0, minf=1 00:19:40.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:40.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:40.521 issued rwts: total=3584,3861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:40.521 job1: (groupid=0, jobs=1): err= 0: pid=2725469: Sat Nov 9 17:29:00 2024 00:19:40.521 read: IOPS=9188, BW=35.9MiB/s (37.6MB/s)(36.0MiB/1003msec) 00:19:40.521 slat (nsec): min=1968, max=2326.6k, avg=50036.21, stdev=188201.65 00:19:40.521 clat (usec): min=256, max=18242, avg=6627.35, stdev=3477.59 00:19:40.521 lat (usec): min=319, max=18284, avg=6677.39, stdev=3502.18 00:19:40.521 clat percentiles (usec): 00:19:40.521 | 1.00th=[ 4015], 5.00th=[ 4817], 10.00th=[ 5014], 20.00th=[ 5145], 00:19:40.521 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:19:40.521 | 70.00th=[ 5669], 80.00th=[ 5866], 90.00th=[11469], 95.00th=[17171], 00:19:40.521 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18220], 99.95th=[18220], 00:19:40.521 | 99.99th=[18220] 00:19:40.521 write: IOPS=9431, BW=36.8MiB/s (38.6MB/s)(37.0MiB/1003msec); 0 zone resets 00:19:40.521 slat (usec): min=2, max=2881, avg=52.96, stdev=190.01 00:19:40.521 clat (usec): min=1744, max=17548, avg=6919.61, stdev=4052.75 00:19:40.521 lat (usec): min=2712, max=17561, avg=6972.57, stdev=4080.17 00:19:40.521 clat percentiles (usec): 00:19:40.521 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4883], 00:19:40.521 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5211], 00:19:40.521 | 70.00th=[ 5342], 80.00th=[ 5669], 90.00th=[16057], 95.00th=[16450], 00:19:40.521 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:19:40.521 | 99.99th=[17433] 00:19:40.521 bw ( KiB/s): min=29608, max=45056, per=40.39%, avg=37332.00, stdev=10923.39, samples=2 00:19:40.521 iops : min= 7402, max=11264, avg=9333.00, stdev=2730.85, samples=2 00:19:40.521 lat (usec) : 500=0.01%, 1000=0.01% 00:19:40.521 lat (msec) : 2=0.12%, 4=0.48%, 10=85.15%, 20=14.23% 00:19:40.521 cpu : usr=4.09%, sys=8.08%, ctx=1728, majf=0, minf=1 00:19:40.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:19:40.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:40.521 issued rwts: total=9216,9460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:40.521 job2: (groupid=0, jobs=1): err= 0: 
pid=2725470: Sat Nov 9 17:29:00 2024 00:19:40.521 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:19:40.521 slat (usec): min=2, max=1803, avg=138.72, stdev=327.32 00:19:40.521 clat (usec): min=10797, max=19770, avg=17810.17, stdev=1055.18 00:19:40.521 lat (usec): min=11482, max=19773, avg=17948.89, stdev=1012.39 00:19:40.521 clat percentiles (usec): 00:19:40.521 | 1.00th=[13173], 5.00th=[15926], 10.00th=[16909], 20.00th=[17171], 00:19:40.521 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:19:40.521 | 70.00th=[18482], 80.00th=[18482], 90.00th=[18744], 95.00th=[18744], 00:19:40.521 | 99.00th=[19792], 99.50th=[19792], 99.90th=[19792], 99.95th=[19792], 00:19:40.521 | 99.99th=[19792] 00:19:40.521 write: IOPS=3850, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1003msec); 0 zone resets 00:19:40.521 slat (usec): min=2, max=1654, avg=125.81, stdev=296.53 00:19:40.521 clat (usec): min=1867, max=18812, avg=16270.41, stdev=2075.53 00:19:40.521 lat (usec): min=2705, max=18816, avg=16396.22, stdev=2067.57 00:19:40.521 clat percentiles (usec): 00:19:40.521 | 1.00th=[ 5604], 5.00th=[11600], 10.00th=[14615], 20.00th=[15926], 00:19:40.521 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16909], 60.00th=[17171], 00:19:40.521 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17695], 95.00th=[17695], 00:19:40.521 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:19:40.521 | 99.99th=[18744] 00:19:40.521 bw ( KiB/s): min=13960, max=15920, per=16.16%, avg=14940.00, stdev=1385.93, samples=2 00:19:40.521 iops : min= 3490, max= 3980, avg=3735.00, stdev=346.48, samples=2 00:19:40.521 lat (msec) : 2=0.01%, 4=0.21%, 10=0.86%, 20=98.91% 00:19:40.521 cpu : usr=1.90%, sys=4.99%, ctx=2368, majf=0, minf=1 00:19:40.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:40.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:40.521 issued rwts: total=3584,3862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:40.521 job3: (groupid=0, jobs=1): err= 0: pid=2725471: Sat Nov 9 17:29:00 2024 00:19:40.521 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:19:40.521 slat (usec): min=2, max=3392, avg=87.29, stdev=372.62 00:19:40.521 clat (usec): min=4005, max=19929, avg=11312.50, stdev=5661.17 00:19:40.521 lat (usec): min=4141, max=19946, avg=11399.79, stdev=5697.38 00:19:40.521 clat percentiles (usec): 00:19:40.521 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6390], 00:19:40.521 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 7111], 60.00th=[14615], 00:19:40.521 | 70.00th=[17957], 80.00th=[18482], 90.00th=[18482], 95.00th=[18744], 00:19:40.521 | 99.00th=[19792], 99.50th=[19792], 99.90th=[19792], 99.95th=[19792], 00:19:40.521 | 99.99th=[20055] 00:19:40.521 write: IOPS=6000, BW=23.4MiB/s (24.6MB/s)(23.5MiB/1003msec); 0 zone resets 00:19:40.521 slat (usec): min=2, max=3098, avg=79.84, stdev=332.11 00:19:40.521 clat (usec): min=2183, max=19682, avg=10509.24, stdev=5182.70 00:19:40.521 lat (usec): min=2192, max=19686, avg=10589.09, stdev=5212.33 00:19:40.521 clat percentiles (usec): 00:19:40.521 | 1.00th=[ 5473], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6128], 00:19:40.521 | 30.00th=[ 6325], 40.00th=[ 6521], 50.00th=[ 6980], 60.00th=[ 9765], 00:19:40.521 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17695], 95.00th=[17957], 00:19:40.521 | 99.00th=[18220], 
99.50th=[19006], 99.90th=[19792], 99.95th=[19792], 00:19:40.521 | 99.99th=[19792] 00:19:40.521 bw ( KiB/s): min=18992, max=28136, per=25.49%, avg=23564.00, stdev=6465.78, samples=2 00:19:40.521 iops : min= 4748, max= 7034, avg=5891.00, stdev=1616.45, samples=2 00:19:40.521 lat (msec) : 4=0.16%, 10=59.44%, 20=40.39% 00:19:40.521 cpu : usr=3.29%, sys=6.19%, ctx=2224, majf=0, minf=2 00:19:40.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:40.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:40.521 issued rwts: total=5632,6018,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:40.521 00:19:40.521 Run status group 0 (all jobs): 00:19:40.521 READ: bw=85.7MiB/s (89.8MB/s), 13.9MiB/s-35.9MiB/s (14.6MB/s-37.6MB/s), io=86.0MiB (90.2MB), run=1003-1004msec 00:19:40.521 WRITE: bw=90.3MiB/s (94.7MB/s), 15.0MiB/s-36.8MiB/s (15.8MB/s-38.6MB/s), io=90.6MiB (95.0MB), run=1003-1004msec 00:19:40.521 00:19:40.521 Disk stats (read/write): 00:19:40.521 nvme0n1: ios=2928/3072, merge=0/0, ticks=13078/12848, in_queue=25926, util=82.87% 00:19:40.521 nvme0n2: ios=8435/8704, merge=0/0, ticks=27603/26765, in_queue=54368, util=84.00% 00:19:40.521 nvme0n3: ios=2873/3072, merge=0/0, ticks=13050/12846, in_queue=25896, util=87.97% 00:19:40.521 nvme0n4: ios=4096/4139, merge=0/0, ticks=13911/12709, in_queue=26620, util=89.22% 00:19:40.521 17:29:00 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:40.521 [global] 00:19:40.521 thread=1 00:19:40.521 invalidate=1 00:19:40.521 rw=randwrite 00:19:40.521 time_based=1 00:19:40.521 runtime=1 00:19:40.521 ioengine=libaio 00:19:40.521 direct=1 00:19:40.521 bs=4096 00:19:40.521 iodepth=128 00:19:40.521 norandommap=0 00:19:40.521 numjobs=1 00:19:40.521 00:19:40.521 verify_dump=1 00:19:40.521 verify_backlog=512 00:19:40.521 verify_state_save=0 00:19:40.521 do_verify=1 00:19:40.521 verify=crc32c-intel 00:19:40.521 [job0] 00:19:40.521 filename=/dev/nvme0n1 00:19:40.521 [job1] 00:19:40.521 filename=/dev/nvme0n2 00:19:40.521 [job2] 00:19:40.521 filename=/dev/nvme0n3 00:19:40.521 [job3] 00:19:40.521 filename=/dev/nvme0n4 00:19:40.521 Could not set queue depth (nvme0n1) 00:19:40.521 Could not set queue depth (nvme0n2) 00:19:40.521 Could not set queue depth (nvme0n3) 00:19:40.521 Could not set queue depth (nvme0n4) 00:19:40.781 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:40.781 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:40.781 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:40.781 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:40.781 fio-3.35 00:19:40.781 Starting 4 threads 00:19:42.159 00:19:42.159 job0: (groupid=0, jobs=1): err= 0: pid=2725902: Sat Nov 9 17:29:01 2024 00:19:42.160 read: IOPS=4832, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1003msec) 00:19:42.160 slat (usec): min=2, max=1022, avg=101.93, stdev=259.09 00:19:42.160 clat (usec): min=2129, max=15593, avg=12993.93, stdev=1209.23 00:19:42.160 lat (usec): min=3023, max=15600, avg=13095.86, stdev=1186.00 00:19:42.160 clat percentiles (usec): 00:19:42.160 | 1.00th=[ 
7898], 5.00th=[10683], 10.00th=[11994], 20.00th=[12649], 00:19:42.160 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:19:42.160 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13829], 95.00th=[13960], 00:19:42.160 | 99.00th=[14222], 99.50th=[14222], 99.90th=[14746], 99.95th=[14746], 00:19:42.160 | 99.99th=[15533] 00:19:42.160 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:19:42.160 slat (usec): min=2, max=1189, avg=95.18, stdev=242.66 00:19:42.160 clat (usec): min=9282, max=13724, avg=12451.27, stdev=777.37 00:19:42.160 lat (usec): min=9880, max=14027, avg=12546.45, stdev=744.37 00:19:42.160 clat percentiles (usec): 00:19:42.160 | 1.00th=[ 9896], 5.00th=[10814], 10.00th=[11469], 20.00th=[11994], 00:19:42.160 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:19:42.160 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13304], 00:19:42.160 | 99.00th=[13566], 99.50th=[13698], 99.90th=[13698], 99.95th=[13698], 00:19:42.160 | 99.99th=[13698] 00:19:42.160 bw ( KiB/s): min=20480, max=20480, per=19.91%, avg=20480.00, stdev= 0.00, samples=2 00:19:42.160 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:19:42.160 lat (msec) : 4=0.16%, 10=1.95%, 20=97.89% 00:19:42.160 cpu : usr=2.30%, sys=4.89%, ctx=1526, majf=0, minf=1 00:19:42.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:42.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:42.160 issued rwts: total=4847,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:42.160 job1: (groupid=0, jobs=1): err= 0: pid=2725903: Sat Nov 9 17:29:01 2024 00:19:42.160 read: IOPS=4837, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1003msec) 00:19:42.160 slat (usec): min=2, max=1027, avg=101.67, stdev=258.31 00:19:42.160 clat (usec): min=2150, max=15615, avg=12993.99, stdev=1225.82 00:19:42.160 lat (usec): min=3014, max=15623, avg=13095.66, stdev=1204.35 00:19:42.160 clat percentiles (usec): 00:19:42.160 | 1.00th=[ 6587], 5.00th=[10683], 10.00th=[12125], 20.00th=[12649], 00:19:42.160 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:19:42.160 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13829], 95.00th=[13960], 00:19:42.160 | 99.00th=[14222], 99.50th=[14222], 99.90th=[14877], 99.95th=[14877], 00:19:42.160 | 99.99th=[15664] 00:19:42.160 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:19:42.160 slat (usec): min=2, max=1190, avg=95.26, stdev=242.53 00:19:42.160 clat (usec): min=9283, max=14046, avg=12443.83, stdev=776.86 00:19:42.160 lat (usec): min=9871, max=14235, avg=12539.09, stdev=743.60 00:19:42.160 clat percentiles (usec): 00:19:42.160 | 1.00th=[ 9896], 5.00th=[10814], 10.00th=[11469], 20.00th=[11994], 00:19:42.160 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:19:42.160 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13304], 00:19:42.160 | 99.00th=[13566], 99.50th=[13698], 99.90th=[13698], 99.95th=[13698], 00:19:42.160 | 99.99th=[14091] 00:19:42.160 bw ( KiB/s): min=20480, max=20480, per=19.91%, avg=20480.00, stdev= 0.00, samples=2 00:19:42.160 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:19:42.160 lat (msec) : 4=0.18%, 10=2.11%, 20=97.71% 00:19:42.160 cpu : usr=2.30%, sys=4.99%, ctx=1481, majf=0, minf=1 00:19:42.160 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:42.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:42.160 issued rwts: total=4852,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:42.160 job2: (groupid=0, jobs=1): err= 0: pid=2725904: Sat Nov 9 17:29:01 2024 00:19:42.160 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:19:42.160 slat (usec): min=2, max=2920, avg=67.57, stdev=252.75 00:19:42.160 clat (usec): min=6605, max=16823, avg=8893.87, stdev=1866.19 00:19:42.160 lat (usec): min=6801, max=16833, avg=8961.45, stdev=1875.29 00:19:42.160 clat percentiles (usec): 00:19:42.160 | 1.00th=[ 6980], 5.00th=[ 7570], 10.00th=[ 7701], 20.00th=[ 7898], 00:19:42.160 | 30.00th=[ 7963], 40.00th=[ 8094], 50.00th=[ 8160], 60.00th=[ 8291], 00:19:42.160 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[12649], 95.00th=[13435], 00:19:42.160 | 99.00th=[14615], 99.50th=[15795], 99.90th=[15926], 99.95th=[16057], 00:19:42.160 | 99.99th=[16909] 00:19:42.160 write: IOPS=7339, BW=28.7MiB/s (30.1MB/s)(28.8MiB/1003msec); 0 zone resets 00:19:42.160 slat (usec): min=2, max=2761, avg=66.18, stdev=240.07 00:19:42.160 clat (usec): min=2124, max=15686, avg=8595.25, stdev=2026.84 00:19:42.160 lat (usec): min=3023, max=15691, avg=8661.43, stdev=2036.52 00:19:42.160 clat percentiles (usec): 00:19:42.160 | 1.00th=[ 6456], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7570], 00:19:42.160 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7832], 60.00th=[ 7898], 00:19:42.160 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[12649], 95.00th=[13435], 00:19:42.160 | 99.00th=[14484], 99.50th=[14877], 99.90th=[15401], 99.95th=[15533], 00:19:42.160 | 99.99th=[15664] 00:19:42.160 bw ( KiB/s): min=26640, max=31240, per=28.13%, avg=28940.00, stdev=3252.69, samples=2 00:19:42.160 iops : min= 6660, max= 7810, avg=7235.00, stdev=813.17, samples=2 00:19:42.160 lat (msec) : 4=0.23%, 10=83.20%, 20=16.57% 00:19:42.160 cpu : usr=3.59%, sys=6.39%, ctx=1025, majf=0, minf=1 00:19:42.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:42.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:42.160 issued rwts: total=7168,7362,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:42.160 job3: (groupid=0, jobs=1): err= 0: pid=2725905: Sat Nov 9 17:29:01 2024 00:19:42.160 read: IOPS=8136, BW=31.8MiB/s (33.3MB/s)(31.8MiB/1001msec) 00:19:42.160 slat (nsec): min=1908, max=3329.7k, avg=55823.35, stdev=221507.86 00:19:42.160 clat (usec): min=504, max=14399, avg=8179.75, stdev=1692.52 00:19:42.160 lat (usec): min=1066, max=15157, avg=8235.58, stdev=1702.54 00:19:42.160 clat percentiles (usec): 00:19:42.160 | 1.00th=[ 2769], 5.00th=[ 5080], 10.00th=[ 6587], 20.00th=[ 7635], 00:19:42.160 | 30.00th=[ 7898], 40.00th=[ 8094], 50.00th=[ 8225], 60.00th=[ 8356], 00:19:42.160 | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[ 9372], 95.00th=[11994], 00:19:42.160 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13698], 99.95th=[14222], 00:19:42.160 | 99.99th=[14353] 00:19:42.160 write: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec); 0 zone resets 00:19:42.160 slat (usec): min=2, max=3910, avg=51.30, stdev=205.60 00:19:42.160 clat (usec): min=558, max=13420, avg=7384.42, stdev=1449.46 
00:19:42.160 lat (usec): min=934, max=13423, avg=7435.72, stdev=1460.34 00:19:42.160 clat percentiles (usec): 00:19:42.160 | 1.00th=[ 2606], 5.00th=[ 4228], 10.00th=[ 5145], 20.00th=[ 6915], 00:19:42.160 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:19:42.160 | 70.00th=[ 8029], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8717], 00:19:42.160 | 99.00th=[10683], 99.50th=[11600], 99.90th=[12649], 99.95th=[12780], 00:19:42.160 | 99.99th=[13435] 00:19:42.160 bw ( KiB/s): min=33960, max=33960, per=33.01%, avg=33960.00, stdev= 0.00, samples=1 00:19:42.160 iops : min= 8490, max= 8490, avg=8490.00, stdev= 0.00, samples=1 00:19:42.160 lat (usec) : 750=0.02%, 1000=0.06% 00:19:42.160 lat (msec) : 2=0.46%, 4=2.85%, 10=92.13%, 20=4.47% 00:19:42.160 cpu : usr=4.10%, sys=8.70%, ctx=1059, majf=0, minf=1 00:19:42.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:42.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:42.160 issued rwts: total=8145,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:42.160 00:19:42.160 Run status group 0 (all jobs): 00:19:42.160 READ: bw=97.4MiB/s (102MB/s), 18.9MiB/s-31.8MiB/s (19.8MB/s-33.3MB/s), io=97.7MiB (102MB), run=1001-1003msec 00:19:42.160 WRITE: bw=100MiB/s (105MB/s), 19.9MiB/s-32.0MiB/s (20.9MB/s-33.5MB/s), io=101MiB (106MB), run=1001-1003msec 00:19:42.160 00:19:42.160 Disk stats (read/write): 00:19:42.160 nvme0n1: ios=4145/4223, merge=0/0, ticks=13472/12977, in_queue=26449, util=84.47% 00:19:42.160 nvme0n2: ios=4096/4226, merge=0/0, ticks=13455/12990, in_queue=26445, util=85.20% 00:19:42.160 nvme0n3: ios=5649/6144, merge=0/0, ticks=25344/26366, in_queue=51710, util=88.45% 00:19:42.160 nvme0n4: ios=6656/7026, merge=0/0, ticks=30461/29642, in_queue=60103, util=89.50% 00:19:42.160 17:29:01 -- target/fio.sh@55 -- # sync 00:19:42.160 17:29:01 -- target/fio.sh@59 -- # fio_pid=2726171 00:19:42.160 17:29:01 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:42.160 17:29:01 -- target/fio.sh@61 -- # sleep 3 00:19:42.160 [global] 00:19:42.160 thread=1 00:19:42.160 invalidate=1 00:19:42.160 rw=read 00:19:42.160 time_based=1 00:19:42.160 runtime=10 00:19:42.160 ioengine=libaio 00:19:42.160 direct=1 00:19:42.160 bs=4096 00:19:42.160 iodepth=1 00:19:42.160 norandommap=1 00:19:42.160 numjobs=1 00:19:42.160 00:19:42.160 [job0] 00:19:42.160 filename=/dev/nvme0n1 00:19:42.160 [job1] 00:19:42.160 filename=/dev/nvme0n2 00:19:42.160 [job2] 00:19:42.160 filename=/dev/nvme0n3 00:19:42.160 [job3] 00:19:42.160 filename=/dev/nvme0n4 00:19:42.160 Could not set queue depth (nvme0n1) 00:19:42.160 Could not set queue depth (nvme0n2) 00:19:42.160 Could not set queue depth (nvme0n3) 00:19:42.160 Could not set queue depth (nvme0n4) 00:19:42.419 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:42.419 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:42.419 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:42.419 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:42.419 fio-3.35 00:19:42.419 Starting 4 threads 00:19:44.954 17:29:04 -- target/fio.sh@63 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:45.213 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=84111360, buflen=4096 00:19:45.213 fio: pid=2726336, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:45.213 17:29:04 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:45.473 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=81891328, buflen=4096 00:19:45.473 fio: pid=2726335, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:45.473 17:29:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:45.473 17:29:05 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:45.732 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=33927168, buflen=4096 00:19:45.732 fio: pid=2726328, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:45.732 17:29:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:45.732 17:29:05 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:45.732 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=59486208, buflen=4096 00:19:45.732 fio: pid=2726334, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:45.992 17:29:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:45.992 17:29:05 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:45.992 00:19:45.992 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2726328: Sat Nov 9 17:29:05 2024 00:19:45.992 read: IOPS=8198, BW=32.0MiB/s (33.6MB/s)(96.4MiB/3009msec) 00:19:45.992 slat (usec): min=3, max=15891, avg=10.56, stdev=145.33 00:19:45.992 clat (usec): min=49, max=371, avg=109.18, stdev=23.14 00:19:45.992 lat (usec): min=52, max=15969, avg=119.74, stdev=146.93 00:19:45.992 clat percentiles (usec): 00:19:45.992 | 1.00th=[ 57], 5.00th=[ 70], 10.00th=[ 75], 20.00th=[ 96], 00:19:45.992 | 30.00th=[ 104], 40.00th=[ 109], 50.00th=[ 112], 60.00th=[ 114], 00:19:45.992 | 70.00th=[ 117], 80.00th=[ 123], 90.00th=[ 135], 95.00th=[ 149], 00:19:45.992 | 99.00th=[ 178], 99.50th=[ 188], 99.90th=[ 200], 99.95th=[ 206], 00:19:45.992 | 99.99th=[ 239] 00:19:45.992 bw ( KiB/s): min=28552, max=33472, per=26.45%, avg=31552.00, stdev=2553.28, samples=5 00:19:45.992 iops : min= 7138, max= 8368, avg=7888.00, stdev=638.32, samples=5 00:19:45.992 lat (usec) : 50=0.02%, 100=23.36%, 250=76.61%, 500=0.01% 00:19:45.992 cpu : usr=3.32%, sys=11.77%, ctx=24673, majf=0, minf=1 00:19:45.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.992 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.992 issued rwts: total=24668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:45.992 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2726334: Sat Nov 9 17:29:05 2024 00:19:45.992 read: 
IOPS=9592, BW=37.5MiB/s (39.3MB/s)(121MiB/3222msec) 00:19:45.992 slat (usec): min=8, max=15863, avg=10.74, stdev=151.28 00:19:45.992 clat (usec): min=38, max=22293, avg=91.35, stdev=127.93 00:19:45.992 lat (usec): min=57, max=22302, avg=102.09, stdev=197.98 00:19:45.992 clat percentiles (usec): 00:19:45.992 | 1.00th=[ 54], 5.00th=[ 59], 10.00th=[ 67], 20.00th=[ 73], 00:19:45.992 | 30.00th=[ 76], 40.00th=[ 79], 50.00th=[ 86], 60.00th=[ 103], 00:19:45.992 | 70.00th=[ 109], 80.00th=[ 112], 90.00th=[ 116], 95.00th=[ 119], 00:19:45.992 | 99.00th=[ 125], 99.50th=[ 131], 99.90th=[ 151], 99.95th=[ 153], 00:19:45.992 | 99.99th=[ 161] 00:19:45.992 bw ( KiB/s): min=33360, max=45696, per=31.79%, avg=37929.50, stdev=5415.65, samples=6 00:19:45.992 iops : min= 8340, max=11424, avg=9482.33, stdev=1353.90, samples=6 00:19:45.992 lat (usec) : 50=0.02%, 100=56.22%, 250=43.75% 00:19:45.992 lat (msec) : 50=0.01% 00:19:45.992 cpu : usr=4.69%, sys=13.13%, ctx=30916, majf=0, minf=2 00:19:45.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.992 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.992 issued rwts: total=30908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:45.992 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2726335: Sat Nov 9 17:29:05 2024 00:19:45.992 read: IOPS=7080, BW=27.7MiB/s (29.0MB/s)(78.1MiB/2824msec) 00:19:45.992 slat (usec): min=8, max=15921, avg=12.04, stdev=158.45 00:19:45.992 clat (usec): min=62, max=22554, avg=126.47, stdev=159.69 00:19:45.992 lat (usec): min=76, max=22563, avg=138.51, stdev=224.79 00:19:45.992 clat percentiles (usec): 00:19:45.992 | 1.00th=[ 77], 5.00th=[ 89], 10.00th=[ 111], 20.00th=[ 118], 00:19:45.992 | 30.00th=[ 121], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 127], 00:19:45.992 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 143], 95.00th=[ 157], 00:19:45.992 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 206], 99.95th=[ 210], 00:19:45.992 | 99.99th=[ 400] 00:19:45.992 bw ( KiB/s): min=25760, max=29472, per=23.95%, avg=28579.20, stdev=1590.54, samples=5 00:19:45.992 iops : min= 6440, max= 7368, avg=7144.80, stdev=397.63, samples=5 00:19:45.992 lat (usec) : 100=7.13%, 250=92.84%, 500=0.02% 00:19:45.992 lat (msec) : 50=0.01% 00:19:45.992 cpu : usr=2.87%, sys=10.06%, ctx=19998, majf=0, minf=2 00:19:45.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.992 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.992 issued rwts: total=19994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:45.992 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2726336: Sat Nov 9 17:29:05 2024 00:19:45.992 read: IOPS=7778, BW=30.4MiB/s (31.9MB/s)(80.2MiB/2640msec) 00:19:45.992 slat (nsec): min=8301, max=66217, avg=9077.50, stdev=1161.32 00:19:45.992 clat (usec): min=70, max=356, avg=117.03, stdev=22.21 00:19:45.992 lat (usec): min=79, max=365, avg=126.11, stdev=22.31 00:19:45.992 clat percentiles (usec): 00:19:45.992 | 1.00th=[ 79], 5.00th=[ 83], 10.00th=[ 86], 20.00th=[ 91], 00:19:45.992 | 30.00th=[ 105], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 126], 
00:19:45.992 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 151], 00:19:45.992 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 200], 99.95th=[ 202], 00:19:45.992 | 99.99th=[ 249] 00:19:45.992 bw ( KiB/s): min=28848, max=40512, per=26.42%, avg=31520.00, stdev=5033.03, samples=5 00:19:45.992 iops : min= 7212, max=10128, avg=7880.00, stdev=1258.26, samples=5 00:19:45.992 lat (usec) : 100=28.84%, 250=71.14%, 500=0.01% 00:19:45.992 cpu : usr=3.41%, sys=11.22%, ctx=20538, majf=0, minf=2 00:19:45.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.992 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.992 issued rwts: total=20536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:45.992 00:19:45.992 Run status group 0 (all jobs): 00:19:45.992 READ: bw=117MiB/s (122MB/s), 27.7MiB/s-37.5MiB/s (29.0MB/s-39.3MB/s), io=375MiB (394MB), run=2640-3222msec 00:19:45.992 00:19:45.992 Disk stats (read/write): 00:19:45.992 nvme0n1: ios=22911/0, merge=0/0, ticks=2392/0, in_queue=2392, util=93.72% 00:19:45.992 nvme0n2: ios=29242/0, merge=0/0, ticks=2482/0, in_queue=2482, util=93.90% 00:19:45.992 nvme0n3: ios=18473/0, merge=0/0, ticks=2169/0, in_queue=2169, util=96.10% 00:19:45.993 nvme0n4: ios=20312/0, merge=0/0, ticks=2220/0, in_queue=2220, util=96.46% 00:19:45.993 17:29:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:45.993 17:29:05 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:46.252 17:29:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:46.252 17:29:05 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:46.512 17:29:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:46.512 17:29:06 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:46.772 17:29:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:46.772 17:29:06 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:47.031 17:29:06 -- target/fio.sh@69 -- # fio_status=0 00:19:47.031 17:29:06 -- target/fio.sh@70 -- # wait 2726171 00:19:47.031 17:29:06 -- target/fio.sh@70 -- # fio_status=4 00:19:47.031 17:29:06 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:47.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:47.967 17:29:07 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:47.967 17:29:07 -- common/autotest_common.sh@1208 -- # local i=0 00:19:47.967 17:29:07 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:47.967 17:29:07 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:47.967 17:29:07 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:47.967 17:29:07 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:47.967 17:29:07 -- common/autotest_common.sh@1220 -- # return 0 00:19:47.967 17:29:07 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:47.967 17:29:07 -- target/fio.sh@80 -- # echo 'nvmf 
hotplug test: fio failed as expected' 00:19:47.967 nvmf hotplug test: fio failed as expected 00:19:47.967 17:29:07 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:47.967 17:29:07 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:47.967 17:29:07 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:47.967 17:29:07 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:47.967 17:29:07 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:47.967 17:29:07 -- target/fio.sh@91 -- # nvmftestfini 00:19:47.967 17:29:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:47.967 17:29:07 -- nvmf/common.sh@116 -- # sync 00:19:47.967 17:29:07 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:47.967 17:29:07 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:47.967 17:29:07 -- nvmf/common.sh@119 -- # set +e 00:19:47.967 17:29:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:47.967 17:29:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:47.967 rmmod nvme_rdma 00:19:47.967 rmmod nvme_fabrics 00:19:48.226 17:29:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:48.226 17:29:07 -- nvmf/common.sh@123 -- # set -e 00:19:48.226 17:29:07 -- nvmf/common.sh@124 -- # return 0 00:19:48.226 17:29:07 -- nvmf/common.sh@477 -- # '[' -n 2723061 ']' 00:19:48.226 17:29:07 -- nvmf/common.sh@478 -- # killprocess 2723061 00:19:48.226 17:29:07 -- common/autotest_common.sh@936 -- # '[' -z 2723061 ']' 00:19:48.226 17:29:07 -- common/autotest_common.sh@940 -- # kill -0 2723061 00:19:48.226 17:29:07 -- common/autotest_common.sh@941 -- # uname 00:19:48.226 17:29:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:48.226 17:29:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2723061 00:19:48.226 17:29:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:48.226 17:29:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:48.226 17:29:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2723061' 00:19:48.226 killing process with pid 2723061 00:19:48.226 17:29:07 -- common/autotest_common.sh@955 -- # kill 2723061 00:19:48.226 17:29:07 -- common/autotest_common.sh@960 -- # wait 2723061 00:19:48.485 17:29:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:48.485 17:29:08 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:48.485 00:19:48.485 real 0m27.018s 00:19:48.485 user 2m10.896s 00:19:48.485 sys 0m10.252s 00:19:48.485 17:29:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:48.485 17:29:08 -- common/autotest_common.sh@10 -- # set +x 00:19:48.485 ************************************ 00:19:48.485 END TEST nvmf_fio_target 00:19:48.485 ************************************ 00:19:48.485 17:29:08 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:19:48.485 17:29:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:48.485 17:29:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:48.485 17:29:08 -- common/autotest_common.sh@10 -- # set +x 00:19:48.485 ************************************ 00:19:48.485 START TEST nvmf_bdevio 00:19:48.485 ************************************ 00:19:48.485 17:29:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:19:48.485 * Looking for test storage... 
00:19:48.485 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:48.745 17:29:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:48.745 17:29:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:48.745 17:29:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:48.745 17:29:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:48.745 17:29:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:48.745 17:29:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:48.745 17:29:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:48.745 17:29:08 -- scripts/common.sh@335 -- # IFS=.-: 00:19:48.745 17:29:08 -- scripts/common.sh@335 -- # read -ra ver1 00:19:48.745 17:29:08 -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.745 17:29:08 -- scripts/common.sh@336 -- # read -ra ver2 00:19:48.745 17:29:08 -- scripts/common.sh@337 -- # local 'op=<' 00:19:48.745 17:29:08 -- scripts/common.sh@339 -- # ver1_l=2 00:19:48.745 17:29:08 -- scripts/common.sh@340 -- # ver2_l=1 00:19:48.745 17:29:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:48.745 17:29:08 -- scripts/common.sh@343 -- # case "$op" in 00:19:48.745 17:29:08 -- scripts/common.sh@344 -- # : 1 00:19:48.745 17:29:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:48.745 17:29:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:48.745 17:29:08 -- scripts/common.sh@364 -- # decimal 1 00:19:48.745 17:29:08 -- scripts/common.sh@352 -- # local d=1 00:19:48.745 17:29:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.745 17:29:08 -- scripts/common.sh@354 -- # echo 1 00:19:48.745 17:29:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:48.745 17:29:08 -- scripts/common.sh@365 -- # decimal 2 00:19:48.745 17:29:08 -- scripts/common.sh@352 -- # local d=2 00:19:48.745 17:29:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.745 17:29:08 -- scripts/common.sh@354 -- # echo 2 00:19:48.745 17:29:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:48.745 17:29:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:48.745 17:29:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:48.745 17:29:08 -- scripts/common.sh@367 -- # return 0 00:19:48.745 17:29:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.745 17:29:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:48.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.745 --rc genhtml_branch_coverage=1 00:19:48.745 --rc genhtml_function_coverage=1 00:19:48.745 --rc genhtml_legend=1 00:19:48.745 --rc geninfo_all_blocks=1 00:19:48.745 --rc geninfo_unexecuted_blocks=1 00:19:48.745 00:19:48.745 ' 00:19:48.745 17:29:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:48.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.745 --rc genhtml_branch_coverage=1 00:19:48.745 --rc genhtml_function_coverage=1 00:19:48.745 --rc genhtml_legend=1 00:19:48.745 --rc geninfo_all_blocks=1 00:19:48.745 --rc geninfo_unexecuted_blocks=1 00:19:48.745 00:19:48.745 ' 00:19:48.745 17:29:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:48.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.745 --rc genhtml_branch_coverage=1 00:19:48.745 --rc genhtml_function_coverage=1 00:19:48.745 --rc genhtml_legend=1 00:19:48.745 --rc geninfo_all_blocks=1 00:19:48.745 --rc geninfo_unexecuted_blocks=1 00:19:48.745 00:19:48.745 ' 
00:19:48.745 17:29:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:48.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.745 --rc genhtml_branch_coverage=1 00:19:48.745 --rc genhtml_function_coverage=1 00:19:48.745 --rc genhtml_legend=1 00:19:48.745 --rc geninfo_all_blocks=1 00:19:48.745 --rc geninfo_unexecuted_blocks=1 00:19:48.745 00:19:48.745 ' 00:19:48.745 17:29:08 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:48.745 17:29:08 -- nvmf/common.sh@7 -- # uname -s 00:19:48.745 17:29:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.745 17:29:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.745 17:29:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.745 17:29:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.745 17:29:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.745 17:29:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.745 17:29:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.745 17:29:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.745 17:29:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.745 17:29:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.745 17:29:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:48.745 17:29:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:48.745 17:29:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.745 17:29:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.745 17:29:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:48.745 17:29:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:48.745 17:29:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.745 17:29:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.745 17:29:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.745 17:29:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.745 17:29:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.745 17:29:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.745 17:29:08 -- paths/export.sh@5 -- # export PATH 00:19:48.745 17:29:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.745 17:29:08 -- nvmf/common.sh@46 -- # : 0 00:19:48.745 17:29:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:48.745 17:29:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:48.745 17:29:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:48.745 17:29:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.745 17:29:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.745 17:29:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:48.745 17:29:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:48.745 17:29:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:48.746 17:29:08 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:48.746 17:29:08 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:48.746 17:29:08 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:48.746 17:29:08 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:48.746 17:29:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.746 17:29:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:48.746 17:29:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:48.746 17:29:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:48.746 17:29:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.746 17:29:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.746 17:29:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.746 17:29:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:48.746 17:29:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:48.746 17:29:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:48.746 17:29:08 -- common/autotest_common.sh@10 -- # set +x 00:19:55.313 17:29:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:55.313 17:29:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:55.313 17:29:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:55.313 17:29:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:55.313 17:29:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:55.313 17:29:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:55.313 17:29:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:55.313 17:29:14 -- nvmf/common.sh@294 -- # net_devs=() 00:19:55.313 17:29:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:55.313 17:29:14 -- nvmf/common.sh@295 
-- # e810=() 00:19:55.313 17:29:14 -- nvmf/common.sh@295 -- # local -ga e810 00:19:55.313 17:29:14 -- nvmf/common.sh@296 -- # x722=() 00:19:55.313 17:29:14 -- nvmf/common.sh@296 -- # local -ga x722 00:19:55.314 17:29:14 -- nvmf/common.sh@297 -- # mlx=() 00:19:55.314 17:29:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:55.314 17:29:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.314 17:29:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.314 17:29:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.314 17:29:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.314 17:29:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.314 17:29:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.314 17:29:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.314 17:29:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.314 17:29:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.314 17:29:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.314 17:29:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.314 17:29:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:55.314 17:29:14 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:55.314 17:29:14 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:55.314 17:29:14 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:55.314 17:29:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:55.314 17:29:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:55.314 17:29:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:55.314 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:55.314 17:29:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:55.314 17:29:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:55.314 17:29:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:55.314 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:55.314 17:29:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:55.314 17:29:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:55.314 17:29:14 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:55.314 17:29:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.314 17:29:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
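(Editor's note) The trace around this point maps each detected Mellanox PCI function to its kernel net interface by globbing sysfs under the device node. A minimal standalone sketch of that same lookup, using the 0000:d9:00.0 / 0000:d9:00.1 addresses reported in this run (any other ConnectX function address would be handled the same way):

    # Sketch: resolve a PCI function to its net interface(s), as nvmf/common.sh does above.
    for pci in 0000:d9:00.0 0000:d9:00.1; do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdev" ] || continue          # skip functions with no bound netdev
            echo "Found net devices under $pci: ${netdev##*/}"
        done
    done
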
00:19:55.314 17:29:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.314 17:29:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:55.314 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:55.314 17:29:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.314 17:29:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:55.314 17:29:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.314 17:29:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:55.314 17:29:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.314 17:29:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:55.314 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:55.314 17:29:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.314 17:29:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:55.314 17:29:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:55.314 17:29:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:55.314 17:29:14 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:55.314 17:29:14 -- nvmf/common.sh@57 -- # uname 00:19:55.314 17:29:14 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:55.314 17:29:14 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:55.314 17:29:14 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:55.314 17:29:14 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:55.314 17:29:14 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:55.314 17:29:14 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:55.314 17:29:14 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:55.314 17:29:14 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:55.314 17:29:14 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:55.314 17:29:14 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:55.314 17:29:14 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:55.314 17:29:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:55.314 17:29:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:55.314 17:29:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:55.314 17:29:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:55.314 17:29:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:55.314 17:29:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:55.314 17:29:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.314 17:29:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:55.314 17:29:14 -- nvmf/common.sh@104 -- # continue 2 00:19:55.314 17:29:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:55.314 17:29:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.314 17:29:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.314 17:29:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:55.314 17:29:14 -- nvmf/common.sh@104 -- # continue 2 00:19:55.314 17:29:14 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:19:55.314 17:29:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:55.314 17:29:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:55.314 17:29:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:55.314 17:29:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:55.314 17:29:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:55.314 17:29:14 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:55.314 17:29:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:55.314 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:55.314 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:55.314 altname enp217s0f0np0 00:19:55.314 altname ens818f0np0 00:19:55.314 inet 192.168.100.8/24 scope global mlx_0_0 00:19:55.314 valid_lft forever preferred_lft forever 00:19:55.314 17:29:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:55.314 17:29:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:55.314 17:29:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:55.314 17:29:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:55.314 17:29:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:55.314 17:29:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:55.314 17:29:14 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:55.314 17:29:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:55.314 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:55.314 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:55.314 altname enp217s0f1np1 00:19:55.314 altname ens818f1np1 00:19:55.314 inet 192.168.100.9/24 scope global mlx_0_1 00:19:55.314 valid_lft forever preferred_lft forever 00:19:55.314 17:29:14 -- nvmf/common.sh@410 -- # return 0 00:19:55.314 17:29:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:55.314 17:29:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:55.314 17:29:14 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:55.314 17:29:14 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:55.314 17:29:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:55.314 17:29:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:55.314 17:29:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:55.314 17:29:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:55.314 17:29:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:55.314 17:29:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:55.314 17:29:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.314 17:29:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:55.314 17:29:14 -- nvmf/common.sh@104 -- # continue 2 00:19:55.314 17:29:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:55.314 17:29:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.314 17:29:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.314 17:29:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:55.314 17:29:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:55.314 17:29:14 -- 
nvmf/common.sh@104 -- # continue 2 00:19:55.314 17:29:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:55.314 17:29:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:55.314 17:29:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:55.314 17:29:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:55.314 17:29:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:55.314 17:29:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:55.314 17:29:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:55.314 17:29:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:55.314 17:29:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:55.314 17:29:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:55.314 17:29:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:55.314 17:29:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:55.314 17:29:14 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:55.314 192.168.100.9' 00:19:55.315 17:29:14 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:55.315 192.168.100.9' 00:19:55.315 17:29:14 -- nvmf/common.sh@445 -- # head -n 1 00:19:55.315 17:29:14 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:55.315 17:29:14 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:55.315 192.168.100.9' 00:19:55.315 17:29:14 -- nvmf/common.sh@446 -- # tail -n +2 00:19:55.315 17:29:14 -- nvmf/common.sh@446 -- # head -n 1 00:19:55.315 17:29:14 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:55.315 17:29:14 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:55.315 17:29:14 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:55.315 17:29:14 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:55.315 17:29:14 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:55.315 17:29:14 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:55.315 17:29:14 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:55.315 17:29:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:55.315 17:29:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.315 17:29:14 -- common/autotest_common.sh@10 -- # set +x 00:19:55.315 17:29:14 -- nvmf/common.sh@469 -- # nvmfpid=2730671 00:19:55.315 17:29:14 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:55.315 17:29:14 -- nvmf/common.sh@470 -- # waitforlisten 2730671 00:19:55.315 17:29:14 -- common/autotest_common.sh@829 -- # '[' -z 2730671 ']' 00:19:55.315 17:29:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.315 17:29:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.315 17:29:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.315 17:29:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.315 17:29:14 -- common/autotest_common.sh@10 -- # set +x 00:19:55.315 [2024-11-09 17:29:15.035776] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:55.315 [2024-11-09 17:29:15.035827] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.315 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.572 [2024-11-09 17:29:15.106729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.572 [2024-11-09 17:29:15.180444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:55.572 [2024-11-09 17:29:15.180555] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.572 [2024-11-09 17:29:15.180564] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.572 [2024-11-09 17:29:15.180573] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.572 [2024-11-09 17:29:15.180717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:55.572 [2024-11-09 17:29:15.180825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:55.572 [2024-11-09 17:29:15.180932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.572 [2024-11-09 17:29:15.180933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:56.139 17:29:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.139 17:29:15 -- common/autotest_common.sh@862 -- # return 0 00:19:56.139 17:29:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:56.139 17:29:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.139 17:29:15 -- common/autotest_common.sh@10 -- # set +x 00:19:56.139 17:29:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.139 17:29:15 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:56.139 17:29:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.139 17:29:15 -- common/autotest_common.sh@10 -- # set +x 00:19:56.398 [2024-11-09 17:29:15.924791] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x250f970/0x2513e60) succeed. 00:19:56.398 [2024-11-09 17:29:15.933898] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2510f60/0x2555500) succeed. 
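(Editor's note) At this point the RDMA transport has been created and both IB devices are up; the trace that follows builds the bdevio test target. As an annotation, the same target can be reproduced against a standalone nvmf_tgt with the RPCs the script issues next; every RPC name and value below is taken from the surrounding trace, only the small shell wrapper is added here as a sketch:

    # Sketch: rebuild the bdevio test target by hand against a running nvmf_tgt.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # RDMA transport (created above)
    $RPC bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB malloc bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

bdevio then attaches to that subsystem through the JSON produced by gen_nvmf_target_json (visible a few lines below), i.e. via the SPDK NVMe-oF initiator rather than the kernel nvme-rdma driver.
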
00:19:56.398 17:29:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.398 17:29:16 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:56.398 17:29:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.398 17:29:16 -- common/autotest_common.sh@10 -- # set +x 00:19:56.398 Malloc0 00:19:56.398 17:29:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.398 17:29:16 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:56.398 17:29:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.398 17:29:16 -- common/autotest_common.sh@10 -- # set +x 00:19:56.398 17:29:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.398 17:29:16 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:56.398 17:29:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.398 17:29:16 -- common/autotest_common.sh@10 -- # set +x 00:19:56.398 17:29:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.398 17:29:16 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:56.398 17:29:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.398 17:29:16 -- common/autotest_common.sh@10 -- # set +x 00:19:56.398 [2024-11-09 17:29:16.097413] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:56.398 17:29:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.398 17:29:16 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:56.398 17:29:16 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:56.398 17:29:16 -- nvmf/common.sh@520 -- # config=() 00:19:56.398 17:29:16 -- nvmf/common.sh@520 -- # local subsystem config 00:19:56.398 17:29:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:56.398 17:29:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:56.398 { 00:19:56.398 "params": { 00:19:56.398 "name": "Nvme$subsystem", 00:19:56.398 "trtype": "$TEST_TRANSPORT", 00:19:56.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.398 "adrfam": "ipv4", 00:19:56.398 "trsvcid": "$NVMF_PORT", 00:19:56.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.398 "hdgst": ${hdgst:-false}, 00:19:56.398 "ddgst": ${ddgst:-false} 00:19:56.398 }, 00:19:56.398 "method": "bdev_nvme_attach_controller" 00:19:56.398 } 00:19:56.398 EOF 00:19:56.398 )") 00:19:56.398 17:29:16 -- nvmf/common.sh@542 -- # cat 00:19:56.398 17:29:16 -- nvmf/common.sh@544 -- # jq . 00:19:56.398 17:29:16 -- nvmf/common.sh@545 -- # IFS=, 00:19:56.398 17:29:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:56.398 "params": { 00:19:56.398 "name": "Nvme1", 00:19:56.398 "trtype": "rdma", 00:19:56.398 "traddr": "192.168.100.8", 00:19:56.398 "adrfam": "ipv4", 00:19:56.398 "trsvcid": "4420", 00:19:56.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.398 "hdgst": false, 00:19:56.398 "ddgst": false 00:19:56.398 }, 00:19:56.398 "method": "bdev_nvme_attach_controller" 00:19:56.398 }' 00:19:56.398 [2024-11-09 17:29:16.148356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:56.398 [2024-11-09 17:29:16.148408] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730901 ] 00:19:56.658 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.658 [2024-11-09 17:29:16.218551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:56.658 [2024-11-09 17:29:16.289740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.658 [2024-11-09 17:29:16.289832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.658 [2024-11-09 17:29:16.289835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.917 [2024-11-09 17:29:16.458621] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:56.917 [2024-11-09 17:29:16.458653] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:56.917 I/O targets: 00:19:56.918 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:56.918 00:19:56.918 00:19:56.918 CUnit - A unit testing framework for C - Version 2.1-3 00:19:56.918 http://cunit.sourceforge.net/ 00:19:56.918 00:19:56.918 00:19:56.918 Suite: bdevio tests on: Nvme1n1 00:19:56.918 Test: blockdev write read block ...passed 00:19:56.918 Test: blockdev write zeroes read block ...passed 00:19:56.918 Test: blockdev write zeroes read no split ...passed 00:19:56.918 Test: blockdev write zeroes read split ...passed 00:19:56.918 Test: blockdev write zeroes read split partial ...passed 00:19:56.918 Test: blockdev reset ...[2024-11-09 17:29:16.488974] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:56.918 [2024-11-09 17:29:16.511640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:56.918 [2024-11-09 17:29:16.538418] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:56.918 passed 00:19:56.918 Test: blockdev write read 8 blocks ...passed 00:19:56.918 Test: blockdev write read size > 128k ...passed 00:19:56.918 Test: blockdev write read invalid size ...passed 00:19:56.918 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:56.918 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:56.918 Test: blockdev write read max offset ...passed 00:19:56.918 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:56.918 Test: blockdev writev readv 8 blocks ...passed 00:19:56.918 Test: blockdev writev readv 30 x 1block ...passed 00:19:56.918 Test: blockdev writev readv block ...passed 00:19:56.918 Test: blockdev writev readv size > 128k ...passed 00:19:56.918 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:56.918 Test: blockdev comparev and writev ...[2024-11-09 17:29:16.541320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.918 [2024-11-09 17:29:16.541349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.918 [2024-11-09 17:29:16.541361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.918 [2024-11-09 17:29:16.541371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.918 [2024-11-09 17:29:16.541557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.918 [2024-11-09 17:29:16.541569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:56.918 [2024-11-09 17:29:16.541581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.918 [2024-11-09 17:29:16.541590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:56.918 [2024-11-09 17:29:16.541753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.918 [2024-11-09 17:29:16.541764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:56.918 [2024-11-09 17:29:16.541774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.918 [2024-11-09 17:29:16.541784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:56.918 [2024-11-09 17:29:16.541951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.918 [2024-11-09 17:29:16.541962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:56.918 [2024-11-09 17:29:16.541973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.918 [2024-11-09 17:29:16.541984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:56.918 passed 00:19:56.918 Test: blockdev nvme passthru rw ...passed 00:19:56.918 Test: blockdev nvme passthru vendor specific ...[2024-11-09 17:29:16.542257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:56.918 [2024-11-09 17:29:16.542269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:56.918 [2024-11-09 17:29:16.542311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:56.918 [2024-11-09 17:29:16.542322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:56.918 [2024-11-09 17:29:16.542365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:56.918 [2024-11-09 17:29:16.542375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:56.918 [2024-11-09 17:29:16.542417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:56.918 [2024-11-09 17:29:16.542427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:56.918 passed 00:19:56.918 Test: blockdev nvme admin passthru ...passed 00:19:56.918 Test: blockdev copy ...passed 00:19:56.918 00:19:56.918 Run Summary: Type Total Ran Passed Failed Inactive 00:19:56.918 suites 1 1 n/a 0 0 00:19:56.918 tests 23 23 23 0 0 00:19:56.918 asserts 152 152 152 0 n/a 00:19:56.918 00:19:56.918 Elapsed time = 0.170 seconds 00:19:57.178 17:29:16 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.178 17:29:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.178 17:29:16 -- common/autotest_common.sh@10 -- # set +x 00:19:57.178 17:29:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.178 17:29:16 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:57.178 17:29:16 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:57.178 17:29:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:57.178 17:29:16 -- nvmf/common.sh@116 -- # sync 00:19:57.178 17:29:16 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:57.178 17:29:16 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:57.178 17:29:16 -- nvmf/common.sh@119 -- # set +e 00:19:57.178 17:29:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:57.178 17:29:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:57.178 rmmod nvme_rdma 00:19:57.178 rmmod nvme_fabrics 00:19:57.178 17:29:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:57.178 17:29:16 -- nvmf/common.sh@123 -- # set -e 00:19:57.178 17:29:16 -- nvmf/common.sh@124 -- # return 0 00:19:57.178 17:29:16 -- nvmf/common.sh@477 -- # '[' -n 2730671 ']' 00:19:57.178 17:29:16 -- nvmf/common.sh@478 -- # killprocess 2730671 00:19:57.178 17:29:16 -- common/autotest_common.sh@936 -- # '[' -z 2730671 ']' 00:19:57.178 17:29:16 -- common/autotest_common.sh@940 -- # kill -0 2730671 00:19:57.178 17:29:16 -- common/autotest_common.sh@941 -- # uname 00:19:57.178 17:29:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:57.178 17:29:16 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2730671 00:19:57.178 17:29:16 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:19:57.178 17:29:16 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:57.178 17:29:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2730671' 00:19:57.178 killing process with pid 2730671 00:19:57.178 17:29:16 -- common/autotest_common.sh@955 -- # kill 2730671 00:19:57.178 17:29:16 -- common/autotest_common.sh@960 -- # wait 2730671 00:19:57.437 17:29:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:57.437 17:29:17 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:57.437 00:19:57.437 real 0m9.017s 00:19:57.437 user 0m10.895s 00:19:57.437 sys 0m5.696s 00:19:57.437 17:29:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:57.437 17:29:17 -- common/autotest_common.sh@10 -- # set +x 00:19:57.437 ************************************ 00:19:57.437 END TEST nvmf_bdevio 00:19:57.437 ************************************ 00:19:57.697 17:29:17 -- nvmf/nvmf.sh@57 -- # '[' rdma = tcp ']' 00:19:57.697 17:29:17 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:19:57.698 17:29:17 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:19:57.698 17:29:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:57.698 17:29:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:57.698 17:29:17 -- common/autotest_common.sh@10 -- # set +x 00:19:57.698 ************************************ 00:19:57.698 START TEST nvmf_fuzz 00:19:57.698 ************************************ 00:19:57.698 17:29:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:19:57.698 * Looking for test storage... 00:19:57.698 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:57.698 17:29:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:57.698 17:29:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:57.698 17:29:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:57.698 17:29:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:57.698 17:29:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:57.698 17:29:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:57.698 17:29:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:57.698 17:29:17 -- scripts/common.sh@335 -- # IFS=.-: 00:19:57.698 17:29:17 -- scripts/common.sh@335 -- # read -ra ver1 00:19:57.698 17:29:17 -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.698 17:29:17 -- scripts/common.sh@336 -- # read -ra ver2 00:19:57.698 17:29:17 -- scripts/common.sh@337 -- # local 'op=<' 00:19:57.698 17:29:17 -- scripts/common.sh@339 -- # ver1_l=2 00:19:57.698 17:29:17 -- scripts/common.sh@340 -- # ver2_l=1 00:19:57.698 17:29:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:57.698 17:29:17 -- scripts/common.sh@343 -- # case "$op" in 00:19:57.698 17:29:17 -- scripts/common.sh@344 -- # : 1 00:19:57.698 17:29:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:57.698 17:29:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:57.698 17:29:17 -- scripts/common.sh@364 -- # decimal 1 00:19:57.698 17:29:17 -- scripts/common.sh@352 -- # local d=1 00:19:57.698 17:29:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.698 17:29:17 -- scripts/common.sh@354 -- # echo 1 00:19:57.698 17:29:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:57.698 17:29:17 -- scripts/common.sh@365 -- # decimal 2 00:19:57.698 17:29:17 -- scripts/common.sh@352 -- # local d=2 00:19:57.698 17:29:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.698 17:29:17 -- scripts/common.sh@354 -- # echo 2 00:19:57.698 17:29:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:57.698 17:29:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:57.698 17:29:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:57.698 17:29:17 -- scripts/common.sh@367 -- # return 0 00:19:57.698 17:29:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.698 17:29:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:57.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.698 --rc genhtml_branch_coverage=1 00:19:57.698 --rc genhtml_function_coverage=1 00:19:57.698 --rc genhtml_legend=1 00:19:57.698 --rc geninfo_all_blocks=1 00:19:57.698 --rc geninfo_unexecuted_blocks=1 00:19:57.698 00:19:57.698 ' 00:19:57.698 17:29:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:57.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.698 --rc genhtml_branch_coverage=1 00:19:57.698 --rc genhtml_function_coverage=1 00:19:57.698 --rc genhtml_legend=1 00:19:57.698 --rc geninfo_all_blocks=1 00:19:57.698 --rc geninfo_unexecuted_blocks=1 00:19:57.698 00:19:57.698 ' 00:19:57.698 17:29:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:57.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.698 --rc genhtml_branch_coverage=1 00:19:57.698 --rc genhtml_function_coverage=1 00:19:57.698 --rc genhtml_legend=1 00:19:57.698 --rc geninfo_all_blocks=1 00:19:57.698 --rc geninfo_unexecuted_blocks=1 00:19:57.698 00:19:57.698 ' 00:19:57.698 17:29:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:57.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.698 --rc genhtml_branch_coverage=1 00:19:57.698 --rc genhtml_function_coverage=1 00:19:57.698 --rc genhtml_legend=1 00:19:57.698 --rc geninfo_all_blocks=1 00:19:57.698 --rc geninfo_unexecuted_blocks=1 00:19:57.698 00:19:57.698 ' 00:19:57.698 17:29:17 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.698 17:29:17 -- nvmf/common.sh@7 -- # uname -s 00:19:57.698 17:29:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.698 17:29:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.698 17:29:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.698 17:29:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.698 17:29:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.698 17:29:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.698 17:29:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.698 17:29:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.698 17:29:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.698 17:29:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.698 17:29:17 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:57.698 17:29:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:57.698 17:29:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.698 17:29:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.698 17:29:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.698 17:29:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:57.698 17:29:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.698 17:29:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.698 17:29:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.698 17:29:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.698 17:29:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.698 17:29:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.698 17:29:17 -- paths/export.sh@5 -- # export PATH 00:19:57.698 17:29:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.698 17:29:17 -- nvmf/common.sh@46 -- # : 0 00:19:57.698 17:29:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:57.698 17:29:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:57.698 17:29:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:57.698 17:29:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.698 17:29:17 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.698 17:29:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:57.698 17:29:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:57.698 17:29:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:57.698 17:29:17 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:19:57.698 17:29:17 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:57.698 17:29:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.698 17:29:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:57.698 17:29:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:57.698 17:29:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:57.698 17:29:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.698 17:29:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.698 17:29:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.698 17:29:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:57.698 17:29:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:57.698 17:29:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:57.698 17:29:17 -- common/autotest_common.sh@10 -- # set +x 00:20:04.357 17:29:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:04.357 17:29:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:04.357 17:29:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:04.357 17:29:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:04.357 17:29:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:04.357 17:29:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:04.357 17:29:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:04.357 17:29:23 -- nvmf/common.sh@294 -- # net_devs=() 00:20:04.357 17:29:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:04.357 17:29:23 -- nvmf/common.sh@295 -- # e810=() 00:20:04.357 17:29:23 -- nvmf/common.sh@295 -- # local -ga e810 00:20:04.357 17:29:23 -- nvmf/common.sh@296 -- # x722=() 00:20:04.357 17:29:23 -- nvmf/common.sh@296 -- # local -ga x722 00:20:04.357 17:29:23 -- nvmf/common.sh@297 -- # mlx=() 00:20:04.357 17:29:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:04.357 17:29:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.357 17:29:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.357 17:29:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.357 17:29:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.357 17:29:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.357 17:29:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.357 17:29:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.357 17:29:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.357 17:29:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.357 17:29:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.357 17:29:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.357 17:29:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:04.357 17:29:23 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:04.357 17:29:23 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:04.357 17:29:23 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
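nvmftestinit's NIC discovery above builds per-vendor PCI device lists keyed by vendor:device IDs (Intel e810/x722 entries and Mellanox mlx entries), and because the mlx5 NIC class is selected ([[ mlx5 == mlx5 ]]) the candidate list is narrowed to the Mellanox devices. The lookups just below then find 0000:d9:00.0 and 0000:d9:00.1 (0x15b3 - 0x1015, a ConnectX-4 Lx part) and resolve each function's netdev through sysfs. A small sketch of that last step, using the addresses from this run purely for illustration:

  # netdev name(s) bound to a PCI function, as in pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci=0000:d9:00.0
  ls /sys/bus/pci/devices/$pci/net/    # -> mlx_0_0 on this node
  pci=0000:d9:00.1
  ls /sys/bus/pci/devices/$pci/net/    # -> mlx_0_1 on this node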
00:20:04.357 17:29:23 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:04.357 17:29:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:04.357 17:29:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:04.357 17:29:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:04.357 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:04.357 17:29:23 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:04.357 17:29:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:04.357 17:29:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:04.357 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:04.357 17:29:23 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:04.357 17:29:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:04.357 17:29:23 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:04.357 17:29:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.357 17:29:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:04.357 17:29:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.357 17:29:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:04.357 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:04.357 17:29:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.357 17:29:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:04.357 17:29:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.357 17:29:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:04.357 17:29:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.357 17:29:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:04.357 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:04.357 17:29:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.357 17:29:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:04.357 17:29:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:04.357 17:29:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:04.357 17:29:23 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:04.357 17:29:23 -- nvmf/common.sh@57 -- # uname 00:20:04.357 17:29:23 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:04.357 17:29:23 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:04.357 17:29:23 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:04.357 17:29:23 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:04.357 
17:29:23 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:04.357 17:29:23 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:04.357 17:29:23 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:04.357 17:29:23 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:04.357 17:29:23 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:04.357 17:29:23 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:04.357 17:29:23 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:04.357 17:29:23 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:04.357 17:29:23 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:04.357 17:29:23 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:04.357 17:29:23 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:04.357 17:29:23 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:04.357 17:29:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:04.357 17:29:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.357 17:29:23 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:04.357 17:29:23 -- nvmf/common.sh@104 -- # continue 2 00:20:04.357 17:29:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:04.357 17:29:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.357 17:29:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.357 17:29:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:04.357 17:29:23 -- nvmf/common.sh@104 -- # continue 2 00:20:04.357 17:29:23 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:04.357 17:29:23 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:04.357 17:29:23 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:04.357 17:29:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:04.357 17:29:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:04.357 17:29:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:04.357 17:29:23 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:04.357 17:29:23 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:04.357 17:29:23 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:04.357 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:04.357 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:04.357 altname enp217s0f0np0 00:20:04.357 altname ens818f0np0 00:20:04.357 inet 192.168.100.8/24 scope global mlx_0_0 00:20:04.358 valid_lft forever preferred_lft forever 00:20:04.358 17:29:23 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:04.358 17:29:23 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:04.358 17:29:23 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:04.358 17:29:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:04.358 17:29:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:04.358 17:29:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:04.358 17:29:23 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:04.358 17:29:23 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:04.358 17:29:23 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:04.358 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:04.358 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:04.358 altname enp217s0f1np1 
00:20:04.358 altname ens818f1np1 00:20:04.358 inet 192.168.100.9/24 scope global mlx_0_1 00:20:04.358 valid_lft forever preferred_lft forever 00:20:04.358 17:29:23 -- nvmf/common.sh@410 -- # return 0 00:20:04.358 17:29:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:04.358 17:29:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:04.358 17:29:23 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:04.358 17:29:23 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:04.358 17:29:23 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:04.358 17:29:23 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:04.358 17:29:23 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:04.358 17:29:23 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:04.358 17:29:23 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:04.358 17:29:23 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:04.358 17:29:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:04.358 17:29:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.358 17:29:23 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:04.358 17:29:23 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:04.358 17:29:23 -- nvmf/common.sh@104 -- # continue 2 00:20:04.358 17:29:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:04.358 17:29:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.358 17:29:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:04.358 17:29:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.358 17:29:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:04.358 17:29:23 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:04.358 17:29:23 -- nvmf/common.sh@104 -- # continue 2 00:20:04.358 17:29:23 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:04.358 17:29:23 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:04.358 17:29:23 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:04.358 17:29:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:04.358 17:29:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:04.358 17:29:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:04.358 17:29:23 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:04.358 17:29:23 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:04.358 17:29:23 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:04.358 17:29:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:04.358 17:29:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:04.358 17:29:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:04.358 17:29:23 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:04.358 192.168.100.9' 00:20:04.358 17:29:23 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:04.358 192.168.100.9' 00:20:04.358 17:29:23 -- nvmf/common.sh@445 -- # head -n 1 00:20:04.358 17:29:23 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:04.358 17:29:23 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:04.358 192.168.100.9' 00:20:04.358 17:29:23 -- nvmf/common.sh@446 -- # tail -n +2 00:20:04.358 17:29:23 -- nvmf/common.sh@446 -- # head -n 1 00:20:04.358 17:29:23 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:04.358 17:29:23 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:04.358 17:29:23 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:20:04.358 17:29:23 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:04.358 17:29:23 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:04.358 17:29:23 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:04.358 17:29:23 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2734352 00:20:04.358 17:29:23 -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:04.358 17:29:23 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:04.358 17:29:23 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2734352 00:20:04.358 17:29:23 -- common/autotest_common.sh@829 -- # '[' -z 2734352 ']' 00:20:04.358 17:29:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.358 17:29:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.358 17:29:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.358 17:29:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.358 17:29:23 -- common/autotest_common.sh@10 -- # set +x 00:20:04.926 17:29:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.926 17:29:24 -- common/autotest_common.sh@862 -- # return 0 00:20:04.926 17:29:24 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:04.926 17:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.926 17:29:24 -- common/autotest_common.sh@10 -- # set +x 00:20:04.926 17:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.926 17:29:24 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:04.926 17:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.926 17:29:24 -- common/autotest_common.sh@10 -- # set +x 00:20:04.926 Malloc0 00:20:04.926 17:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.926 17:29:24 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.926 17:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.926 17:29:24 -- common/autotest_common.sh@10 -- # set +x 00:20:04.926 17:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.926 17:29:24 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:04.926 17:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.926 17:29:24 -- common/autotest_common.sh@10 -- # set +x 00:20:04.926 17:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.926 17:29:24 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:04.926 17:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.926 17:29:24 -- common/autotest_common.sh@10 -- # set +x 00:20:04.926 17:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.926 17:29:24 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:20:04.926 17:29:24 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:rdma 
adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:20:37.046 Fuzzing completed. Shutting down the fuzz application 00:20:37.046 00:20:37.046 Dumping successful admin opcodes: 00:20:37.046 8, 9, 10, 24, 00:20:37.046 Dumping successful io opcodes: 00:20:37.046 0, 9, 00:20:37.046 NS: 0x200003af1f00 I/O qp, Total commands completed: 1004953, total successful commands: 5886, random_seed: 19914368 00:20:37.046 NS: 0x200003af1f00 admin qp, Total commands completed: 126624, total successful commands: 1031, random_seed: 2463629120 00:20:37.046 17:29:55 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:20:37.046 Fuzzing completed. Shutting down the fuzz application 00:20:37.046 00:20:37.046 Dumping successful admin opcodes: 00:20:37.046 24, 00:20:37.046 Dumping successful io opcodes: 00:20:37.046 00:20:37.046 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 407792550 00:20:37.046 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 407872960 00:20:37.046 17:29:56 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:37.046 17:29:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.046 17:29:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.046 17:29:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.046 17:29:56 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:37.046 17:29:56 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:20:37.046 17:29:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:37.046 17:29:56 -- nvmf/common.sh@116 -- # sync 00:20:37.046 17:29:56 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:37.046 17:29:56 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:37.046 17:29:56 -- nvmf/common.sh@119 -- # set +e 00:20:37.046 17:29:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:37.046 17:29:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:37.046 rmmod nvme_rdma 00:20:37.046 rmmod nvme_fabrics 00:20:37.046 17:29:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:37.046 17:29:56 -- nvmf/common.sh@123 -- # set -e 00:20:37.046 17:29:56 -- nvmf/common.sh@124 -- # return 0 00:20:37.046 17:29:56 -- nvmf/common.sh@477 -- # '[' -n 2734352 ']' 00:20:37.046 17:29:56 -- nvmf/common.sh@478 -- # killprocess 2734352 00:20:37.046 17:29:56 -- common/autotest_common.sh@936 -- # '[' -z 2734352 ']' 00:20:37.046 17:29:56 -- common/autotest_common.sh@940 -- # kill -0 2734352 00:20:37.046 17:29:56 -- common/autotest_common.sh@941 -- # uname 00:20:37.046 17:29:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:37.046 17:29:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2734352 00:20:37.046 17:29:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:37.046 17:29:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:37.046 17:29:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2734352' 00:20:37.046 killing process with pid 2734352 00:20:37.046 17:29:56 -- common/autotest_common.sh@955 -- # kill 2734352 00:20:37.046 17:29:56 -- common/autotest_common.sh@960 -- # wait 2734352 00:20:37.046 
17:29:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:37.046 17:29:56 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:37.046 17:29:56 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:20:37.046 00:20:37.046 real 0m39.520s 00:20:37.046 user 0m50.098s 00:20:37.046 sys 0m21.134s 00:20:37.046 17:29:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:37.046 17:29:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.046 ************************************ 00:20:37.046 END TEST nvmf_fuzz 00:20:37.046 ************************************ 00:20:37.046 17:29:56 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:20:37.046 17:29:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:37.046 17:29:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:37.046 17:29:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.046 ************************************ 00:20:37.046 START TEST nvmf_multiconnection 00:20:37.046 ************************************ 00:20:37.046 17:29:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:20:37.306 * Looking for test storage... 00:20:37.306 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:37.306 17:29:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:37.306 17:29:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:37.306 17:29:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:37.306 17:29:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:37.306 17:29:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:37.306 17:29:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:37.306 17:29:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:37.306 17:29:56 -- scripts/common.sh@335 -- # IFS=.-: 00:20:37.306 17:29:56 -- scripts/common.sh@335 -- # read -ra ver1 00:20:37.306 17:29:56 -- scripts/common.sh@336 -- # IFS=.-: 00:20:37.306 17:29:56 -- scripts/common.sh@336 -- # read -ra ver2 00:20:37.306 17:29:56 -- scripts/common.sh@337 -- # local 'op=<' 00:20:37.306 17:29:56 -- scripts/common.sh@339 -- # ver1_l=2 00:20:37.306 17:29:56 -- scripts/common.sh@340 -- # ver2_l=1 00:20:37.306 17:29:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:37.306 17:29:56 -- scripts/common.sh@343 -- # case "$op" in 00:20:37.306 17:29:56 -- scripts/common.sh@344 -- # : 1 00:20:37.306 17:29:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:37.306 17:29:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:37.306 17:29:56 -- scripts/common.sh@364 -- # decimal 1 00:20:37.306 17:29:56 -- scripts/common.sh@352 -- # local d=1 00:20:37.306 17:29:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:37.306 17:29:56 -- scripts/common.sh@354 -- # echo 1 00:20:37.306 17:29:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:37.306 17:29:56 -- scripts/common.sh@365 -- # decimal 2 00:20:37.306 17:29:56 -- scripts/common.sh@352 -- # local d=2 00:20:37.306 17:29:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:37.306 17:29:56 -- scripts/common.sh@354 -- # echo 2 00:20:37.306 17:29:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:37.306 17:29:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:37.306 17:29:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:37.306 17:29:56 -- scripts/common.sh@367 -- # return 0 00:20:37.306 17:29:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:37.306 17:29:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:37.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.306 --rc genhtml_branch_coverage=1 00:20:37.306 --rc genhtml_function_coverage=1 00:20:37.306 --rc genhtml_legend=1 00:20:37.306 --rc geninfo_all_blocks=1 00:20:37.306 --rc geninfo_unexecuted_blocks=1 00:20:37.306 00:20:37.306 ' 00:20:37.306 17:29:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:37.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.306 --rc genhtml_branch_coverage=1 00:20:37.306 --rc genhtml_function_coverage=1 00:20:37.306 --rc genhtml_legend=1 00:20:37.306 --rc geninfo_all_blocks=1 00:20:37.306 --rc geninfo_unexecuted_blocks=1 00:20:37.306 00:20:37.306 ' 00:20:37.306 17:29:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:37.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.306 --rc genhtml_branch_coverage=1 00:20:37.306 --rc genhtml_function_coverage=1 00:20:37.306 --rc genhtml_legend=1 00:20:37.306 --rc geninfo_all_blocks=1 00:20:37.306 --rc geninfo_unexecuted_blocks=1 00:20:37.306 00:20:37.306 ' 00:20:37.306 17:29:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:37.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.306 --rc genhtml_branch_coverage=1 00:20:37.306 --rc genhtml_function_coverage=1 00:20:37.306 --rc genhtml_legend=1 00:20:37.306 --rc geninfo_all_blocks=1 00:20:37.306 --rc geninfo_unexecuted_blocks=1 00:20:37.306 00:20:37.306 ' 00:20:37.306 17:29:56 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.306 17:29:56 -- nvmf/common.sh@7 -- # uname -s 00:20:37.306 17:29:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.306 17:29:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.306 17:29:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.306 17:29:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.306 17:29:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.306 17:29:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.306 17:29:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.306 17:29:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.306 17:29:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.306 17:29:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.306 17:29:56 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:37.306 17:29:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:37.306 17:29:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.306 17:29:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.306 17:29:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.306 17:29:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:37.306 17:29:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.306 17:29:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.306 17:29:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.306 17:29:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.306 17:29:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.306 17:29:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.306 17:29:56 -- paths/export.sh@5 -- # export PATH 00:20:37.306 17:29:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.306 17:29:56 -- nvmf/common.sh@46 -- # : 0 00:20:37.306 17:29:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:37.306 17:29:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:37.306 17:29:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:37.306 17:29:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.306 17:29:56 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.306 17:29:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:37.306 17:29:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:37.306 17:29:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:37.306 17:29:56 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:37.306 17:29:56 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:37.306 17:29:56 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:37.306 17:29:56 -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:37.306 17:29:56 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:37.306 17:29:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.306 17:29:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:37.306 17:29:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:37.306 17:29:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:37.306 17:29:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.307 17:29:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.307 17:29:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.307 17:29:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:37.307 17:29:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:37.307 17:29:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:37.307 17:29:57 -- common/autotest_common.sh@10 -- # set +x 00:20:43.878 17:30:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:43.878 17:30:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:43.878 17:30:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:43.878 17:30:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:43.878 17:30:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:43.878 17:30:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:43.878 17:30:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:43.878 17:30:02 -- nvmf/common.sh@294 -- # net_devs=() 00:20:43.878 17:30:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:43.878 17:30:02 -- nvmf/common.sh@295 -- # e810=() 00:20:43.878 17:30:02 -- nvmf/common.sh@295 -- # local -ga e810 00:20:43.878 17:30:02 -- nvmf/common.sh@296 -- # x722=() 00:20:43.878 17:30:02 -- nvmf/common.sh@296 -- # local -ga x722 00:20:43.878 17:30:02 -- nvmf/common.sh@297 -- # mlx=() 00:20:43.878 17:30:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:43.878 17:30:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.878 17:30:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.878 17:30:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.878 17:30:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.878 17:30:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.878 17:30:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.878 17:30:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.878 17:30:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.878 17:30:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.878 17:30:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.878 17:30:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.878 17:30:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:43.878 17:30:02 -- nvmf/common.sh@320 -- # [[ 
rdma == rdma ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:43.878 17:30:02 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:43.878 17:30:02 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:43.878 17:30:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:43.878 17:30:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:43.878 17:30:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:43.878 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:43.878 17:30:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:43.878 17:30:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:43.878 17:30:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:43.878 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:43.878 17:30:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:43.878 17:30:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:43.878 17:30:02 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:43.878 17:30:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.878 17:30:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:43.878 17:30:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.878 17:30:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:43.878 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:43.878 17:30:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.878 17:30:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:43.878 17:30:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.878 17:30:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:43.878 17:30:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.878 17:30:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:43.878 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:43.878 17:30:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.878 17:30:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:43.878 17:30:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:43.878 17:30:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:43.878 17:30:02 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:43.878 17:30:02 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:43.878 17:30:02 -- nvmf/common.sh@57 -- # uname 00:20:43.878 17:30:02 -- nvmf/common.sh@57 -- # '[' 
Linux '!=' Linux ']' 00:20:43.878 17:30:02 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:43.878 17:30:02 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:43.878 17:30:02 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:43.878 17:30:03 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:43.878 17:30:03 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:43.878 17:30:03 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:43.878 17:30:03 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:43.878 17:30:03 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:43.878 17:30:03 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:43.878 17:30:03 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:43.878 17:30:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:43.878 17:30:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:43.878 17:30:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:43.878 17:30:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:43.878 17:30:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:43.878 17:30:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:43.878 17:30:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.878 17:30:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:43.878 17:30:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:43.878 17:30:03 -- nvmf/common.sh@104 -- # continue 2 00:20:43.878 17:30:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:43.878 17:30:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.878 17:30:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:43.878 17:30:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.878 17:30:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:43.878 17:30:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:43.878 17:30:03 -- nvmf/common.sh@104 -- # continue 2 00:20:43.878 17:30:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:43.878 17:30:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:43.878 17:30:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:43.878 17:30:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:43.878 17:30:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:43.878 17:30:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:43.878 17:30:03 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:43.878 17:30:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:43.878 17:30:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:43.878 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:43.878 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:43.878 altname enp217s0f0np0 00:20:43.878 altname ens818f0np0 00:20:43.878 inet 192.168.100.8/24 scope global mlx_0_0 00:20:43.878 valid_lft forever preferred_lft forever 00:20:43.878 17:30:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:43.878 17:30:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:43.878 17:30:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:43.878 17:30:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:43.878 17:30:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:43.878 17:30:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:43.878 17:30:03 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:43.878 17:30:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:43.878 17:30:03 -- 
nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:43.878 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:43.878 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:43.878 altname enp217s0f1np1 00:20:43.878 altname ens818f1np1 00:20:43.878 inet 192.168.100.9/24 scope global mlx_0_1 00:20:43.878 valid_lft forever preferred_lft forever 00:20:43.878 17:30:03 -- nvmf/common.sh@410 -- # return 0 00:20:43.878 17:30:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:43.878 17:30:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:43.878 17:30:03 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:43.878 17:30:03 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:43.878 17:30:03 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:43.878 17:30:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:43.878 17:30:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:43.878 17:30:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:43.878 17:30:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:43.878 17:30:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:43.878 17:30:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:43.879 17:30:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.879 17:30:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:43.879 17:30:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:43.879 17:30:03 -- nvmf/common.sh@104 -- # continue 2 00:20:43.879 17:30:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:43.879 17:30:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.879 17:30:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:43.879 17:30:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.879 17:30:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:43.879 17:30:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:43.879 17:30:03 -- nvmf/common.sh@104 -- # continue 2 00:20:43.879 17:30:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:43.879 17:30:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:43.879 17:30:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:43.879 17:30:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:43.879 17:30:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:43.879 17:30:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:43.879 17:30:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:43.879 17:30:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:43.879 17:30:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:43.879 17:30:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:43.879 17:30:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:43.879 17:30:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:43.879 17:30:03 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:43.879 192.168.100.9' 00:20:43.879 17:30:03 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:43.879 192.168.100.9' 00:20:43.879 17:30:03 -- nvmf/common.sh@445 -- # head -n 1 00:20:43.879 17:30:03 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:43.879 17:30:03 -- nvmf/common.sh@446 -- # head -n 1 00:20:43.879 17:30:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:43.879 192.168.100.9' 00:20:43.879 17:30:03 -- nvmf/common.sh@446 -- # tail -n +2 00:20:43.879 17:30:03 -- 
nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:43.879 17:30:03 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:43.879 17:30:03 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:43.879 17:30:03 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:43.879 17:30:03 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:43.879 17:30:03 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:43.879 17:30:03 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:20:43.879 17:30:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:43.879 17:30:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:43.879 17:30:03 -- common/autotest_common.sh@10 -- # set +x 00:20:43.879 17:30:03 -- nvmf/common.sh@469 -- # nvmfpid=2743301 00:20:43.879 17:30:03 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:43.879 17:30:03 -- nvmf/common.sh@470 -- # waitforlisten 2743301 00:20:43.879 17:30:03 -- common/autotest_common.sh@829 -- # '[' -z 2743301 ']' 00:20:43.879 17:30:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.879 17:30:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.879 17:30:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.879 17:30:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.879 17:30:03 -- common/autotest_common.sh@10 -- # set +x 00:20:43.879 [2024-11-09 17:30:03.262344] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:43.879 [2024-11-09 17:30:03.262392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.879 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.879 [2024-11-09 17:30:03.333620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.879 [2024-11-09 17:30:03.413998] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:43.879 [2024-11-09 17:30:03.414125] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.879 [2024-11-09 17:30:03.414137] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.879 [2024-11-09 17:30:03.414148] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
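At this point the test has loaded nvme-rdma and started the NVMe-oF target (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 2743301), and waitforlisten blocks until the target is answering on /var/tmp/spdk.sock before any RPCs are issued. The helper's body is not echoed in this trace, so the following is only a minimal sketch of that start-and-poll pattern, assuming rpc.py's rpc_get_methods as the liveness probe (the real autotest_common.sh helper may differ):

  NVMF_APP=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
  "$NVMF_APP" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the UNIX domain socket until the target accepts a trivial RPC
  # (mirrors the "Waiting for process to start up and listen ..." message above).
  for ((retry = 0; retry < 100; retry++)); do
      if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.5
  done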
00:20:43.879 [2024-11-09 17:30:03.414194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.879 [2024-11-09 17:30:03.414293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.879 [2024-11-09 17:30:03.414355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.879 [2024-11-09 17:30:03.414358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.447 17:30:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.447 17:30:04 -- common/autotest_common.sh@862 -- # return 0 00:20:44.447 17:30:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:44.447 17:30:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:44.447 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 17:30:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.447 17:30:04 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:44.447 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.447 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 [2024-11-09 17:30:04.178798] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa88090/0xa8c580) succeed. 00:20:44.447 [2024-11-09 17:30:04.188049] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa89680/0xacdc20) succeed. 00:20:44.706 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.706 17:30:04 -- target/multiconnection.sh@21 -- # seq 1 11 00:20:44.706 17:30:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.706 17:30:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:44.706 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.706 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.706 Malloc1 00:20:44.706 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.706 17:30:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:20:44.706 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.706 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.706 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.706 17:30:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:44.706 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.706 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.706 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.706 17:30:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:44.706 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.706 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.706 [2024-11-09 17:30:04.362248] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:44.706 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.706 17:30:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.706 17:30:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:44.706 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.706 17:30:04 -- common/autotest_common.sh@10 -- # set +x 
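The loop that runs next (multiconnection.sh@21 through @25 in the trace) repeats the same four RPCs for i=1..11: create a 64 MiB malloc bdev with 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnode$i with serial SPDK$i, attach the bdev as a namespace, and add an RDMA listener on 192.168.100.8 port 4420. Written with scripts/rpc.py directly rather than the rpc_cmd wrapper, the equivalent calls are roughly (a sketch, not the script's literal text):

  # One malloc-backed namespace and one RDMA listener per subsystem (cnode1..cnode11).
  for i in $(seq 1 11); do
      scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
  done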
00:20:44.706 Malloc2 00:20:44.706 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.706 17:30:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:44.706 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.706 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.706 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.706 17:30:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:44.706 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.706 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.706 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.706 17:30:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:20:44.706 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.706 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.706 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.706 17:30:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.706 17:30:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:44.706 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.706 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.706 Malloc3 00:20:44.706 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.706 17:30:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:20:44.706 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.706 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.706 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.706 17:30:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:44.706 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.706 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.706 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.706 17:30:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:20:44.706 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.706 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.706 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.706 17:30:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.706 17:30:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:44.706 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.706 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.706 Malloc4 00:20:44.706 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.706 17:30:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:20:44.706 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.706 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:44.965 17:30:04 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.965 17:30:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 Malloc5 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.965 17:30:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 Malloc6 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.965 17:30:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 Malloc7 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.965 17:30:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 Malloc8 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.965 17:30:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:20:44.965 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.965 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.966 17:30:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:20:44.966 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.966 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.966 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.966 17:30:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.966 17:30:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:20:44.966 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.966 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.966 Malloc9 00:20:44.966 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.966 17:30:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:20:44.966 17:30:04 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:44.966 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.966 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.966 17:30:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:20:44.966 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.966 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.966 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.966 17:30:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:20:44.966 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.966 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:45.225 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.225 17:30:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:45.225 17:30:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:20:45.225 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.225 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:45.225 Malloc10 00:20:45.225 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.225 17:30:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:20:45.225 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.225 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:45.225 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.225 17:30:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:20:45.225 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.225 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:45.225 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.225 17:30:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:20:45.225 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.225 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:45.225 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.225 17:30:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:45.225 17:30:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:20:45.225 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.225 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:45.225 Malloc11 00:20:45.225 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.225 17:30:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:20:45.225 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.225 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:45.225 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.225 17:30:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:20:45.225 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.225 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:45.225 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.225 17:30:04 -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:20:45.225 17:30:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.225 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:20:45.225 17:30:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.225 17:30:04 -- target/multiconnection.sh@28 -- # seq 1 11 00:20:45.225 17:30:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:45.225 17:30:04 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:46.162 17:30:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:20:46.162 17:30:05 -- common/autotest_common.sh@1187 -- # local i=0 00:20:46.162 17:30:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:46.162 17:30:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:46.162 17:30:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:48.065 17:30:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:48.065 17:30:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:48.065 17:30:07 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:20:48.324 17:30:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:48.324 17:30:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:48.324 17:30:07 -- common/autotest_common.sh@1197 -- # return 0 00:20:48.324 17:30:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:48.324 17:30:07 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:20:49.261 17:30:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:20:49.261 17:30:08 -- common/autotest_common.sh@1187 -- # local i=0 00:20:49.261 17:30:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:49.261 17:30:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:49.261 17:30:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:51.165 17:30:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:51.165 17:30:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:51.165 17:30:10 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:20:51.165 17:30:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:51.165 17:30:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:51.165 17:30:10 -- common/autotest_common.sh@1197 -- # return 0 00:20:51.165 17:30:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:51.165 17:30:10 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:20:52.101 17:30:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:20:52.101 17:30:11 -- common/autotest_common.sh@1187 -- # local i=0 00:20:52.101 17:30:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:52.101 17:30:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:52.101 17:30:11 -- 
common/autotest_common.sh@1194 -- # sleep 2 00:20:54.635 17:30:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:54.635 17:30:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:54.635 17:30:13 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:20:54.635 17:30:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:54.635 17:30:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:54.635 17:30:13 -- common/autotest_common.sh@1197 -- # return 0 00:20:54.635 17:30:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:54.635 17:30:13 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:20:55.203 17:30:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:20:55.203 17:30:14 -- common/autotest_common.sh@1187 -- # local i=0 00:20:55.203 17:30:14 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:55.203 17:30:14 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:55.203 17:30:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:57.109 17:30:16 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:57.109 17:30:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:57.109 17:30:16 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:20:57.109 17:30:16 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:57.109 17:30:16 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:57.109 17:30:16 -- common/autotest_common.sh@1197 -- # return 0 00:20:57.109 17:30:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:57.109 17:30:16 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:20:58.487 17:30:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:20:58.487 17:30:17 -- common/autotest_common.sh@1187 -- # local i=0 00:20:58.487 17:30:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:58.487 17:30:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:58.487 17:30:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:00.391 17:30:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:00.391 17:30:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:00.391 17:30:19 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:21:00.391 17:30:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:00.391 17:30:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:00.391 17:30:19 -- common/autotest_common.sh@1197 -- # return 0 00:21:00.391 17:30:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:00.391 17:30:19 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:21:01.327 17:30:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:01.327 17:30:20 -- common/autotest_common.sh@1187 -- # local i=0 00:21:01.327 17:30:20 -- 
common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:01.327 17:30:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:01.327 17:30:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:03.264 17:30:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:03.264 17:30:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:03.264 17:30:22 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:21:03.264 17:30:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:03.264 17:30:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:03.264 17:30:22 -- common/autotest_common.sh@1197 -- # return 0 00:21:03.264 17:30:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:03.264 17:30:22 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:21:04.200 17:30:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:04.200 17:30:23 -- common/autotest_common.sh@1187 -- # local i=0 00:21:04.200 17:30:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:04.200 17:30:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:04.201 17:30:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:06.737 17:30:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:06.737 17:30:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:06.737 17:30:25 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:21:06.737 17:30:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:06.737 17:30:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:06.737 17:30:25 -- common/autotest_common.sh@1197 -- # return 0 00:21:06.737 17:30:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:06.737 17:30:25 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:21:07.308 17:30:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:07.308 17:30:26 -- common/autotest_common.sh@1187 -- # local i=0 00:21:07.308 17:30:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:07.308 17:30:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:07.308 17:30:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:09.211 17:30:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:09.211 17:30:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:09.211 17:30:28 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:21:09.211 17:30:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:09.211 17:30:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:09.211 17:30:28 -- common/autotest_common.sh@1197 -- # return 0 00:21:09.211 17:30:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:09.211 17:30:28 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:21:10.686 
17:30:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:10.686 17:30:29 -- common/autotest_common.sh@1187 -- # local i=0 00:21:10.686 17:30:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:10.686 17:30:29 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:10.686 17:30:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:12.627 17:30:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:12.627 17:30:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:12.627 17:30:31 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:21:12.627 17:30:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:12.627 17:30:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:12.627 17:30:31 -- common/autotest_common.sh@1197 -- # return 0 00:21:12.627 17:30:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.627 17:30:31 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:21:13.195 17:30:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:13.195 17:30:32 -- common/autotest_common.sh@1187 -- # local i=0 00:21:13.195 17:30:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:13.195 17:30:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:13.195 17:30:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:15.729 17:30:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:15.729 17:30:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:15.729 17:30:34 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:21:15.729 17:30:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:15.729 17:30:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:15.729 17:30:34 -- common/autotest_common.sh@1197 -- # return 0 00:21:15.729 17:30:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:15.729 17:30:34 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:21:16.297 17:30:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:16.297 17:30:35 -- common/autotest_common.sh@1187 -- # local i=0 00:21:16.297 17:30:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:16.297 17:30:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:16.297 17:30:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:18.201 17:30:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:18.201 17:30:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:18.201 17:30:37 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:21:18.460 17:30:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:18.460 17:30:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:18.460 17:30:37 -- common/autotest_common.sh@1197 -- # return 0 00:21:18.460 17:30:37 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:18.460 [global] 00:21:18.460 
thread=1 00:21:18.460 invalidate=1 00:21:18.460 rw=read 00:21:18.460 time_based=1 00:21:18.460 runtime=10 00:21:18.460 ioengine=libaio 00:21:18.460 direct=1 00:21:18.460 bs=262144 00:21:18.460 iodepth=64 00:21:18.460 norandommap=1 00:21:18.460 numjobs=1 00:21:18.460 00:21:18.460 [job0] 00:21:18.460 filename=/dev/nvme0n1 00:21:18.460 [job1] 00:21:18.460 filename=/dev/nvme10n1 00:21:18.460 [job2] 00:21:18.460 filename=/dev/nvme1n1 00:21:18.460 [job3] 00:21:18.460 filename=/dev/nvme2n1 00:21:18.460 [job4] 00:21:18.460 filename=/dev/nvme3n1 00:21:18.460 [job5] 00:21:18.460 filename=/dev/nvme4n1 00:21:18.460 [job6] 00:21:18.460 filename=/dev/nvme5n1 00:21:18.460 [job7] 00:21:18.460 filename=/dev/nvme6n1 00:21:18.460 [job8] 00:21:18.460 filename=/dev/nvme7n1 00:21:18.460 [job9] 00:21:18.460 filename=/dev/nvme8n1 00:21:18.460 [job10] 00:21:18.460 filename=/dev/nvme9n1 00:21:18.736 Could not set queue depth (nvme0n1) 00:21:18.736 Could not set queue depth (nvme10n1) 00:21:18.736 Could not set queue depth (nvme1n1) 00:21:18.736 Could not set queue depth (nvme2n1) 00:21:18.736 Could not set queue depth (nvme3n1) 00:21:18.736 Could not set queue depth (nvme4n1) 00:21:18.736 Could not set queue depth (nvme5n1) 00:21:18.736 Could not set queue depth (nvme6n1) 00:21:18.736 Could not set queue depth (nvme7n1) 00:21:18.736 Could not set queue depth (nvme8n1) 00:21:18.736 Could not set queue depth (nvme9n1) 00:21:18.994 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.994 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.994 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.994 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.994 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.994 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.994 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.994 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.994 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.994 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.994 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.994 fio-3.35 00:21:18.994 Starting 11 threads 00:21:31.217 00:21:31.217 job0: (groupid=0, jobs=1): err= 0: pid=2750057: Sat Nov 9 17:30:49 2024 00:21:31.217 read: IOPS=1307, BW=327MiB/s (343MB/s)(3284MiB/10045msec) 00:21:31.217 slat (usec): min=12, max=33564, avg=749.44, stdev=2189.64 00:21:31.217 clat (usec): min=13092, max=95611, avg=48143.85, stdev=11219.88 00:21:31.217 lat (usec): min=13354, max=98833, avg=48893.29, stdev=11534.62 00:21:31.217 clat percentiles (usec): 00:21:31.217 | 1.00th=[29230], 5.00th=[30540], 10.00th=[31589], 20.00th=[34341], 00:21:31.217 | 30.00th=[46400], 40.00th=[46924], 50.00th=[47449], 60.00th=[48497], 00:21:31.217 | 70.00th=[50070], 80.00th=[61604], 90.00th=[63701], 95.00th=[65274], 00:21:31.217 | 99.00th=[68682], 99.50th=[72877], 
99.90th=[91751], 99.95th=[92799], 00:21:31.217 | 99.99th=[94897] 00:21:31.217 bw ( KiB/s): min=249330, max=504320, per=8.11%, avg=334710.15, stdev=78706.68, samples=20 00:21:31.217 iops : min= 973, max= 1970, avg=1307.30, stdev=307.50, samples=20 00:21:31.217 lat (msec) : 20=0.40%, 50=69.34%, 100=30.27% 00:21:31.217 cpu : usr=0.60%, sys=5.44%, ctx=2563, majf=0, minf=4097 00:21:31.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:21:31.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.217 issued rwts: total=13136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.217 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.217 job1: (groupid=0, jobs=1): err= 0: pid=2750070: Sat Nov 9 17:30:49 2024 00:21:31.217 read: IOPS=2263, BW=566MiB/s (593MB/s)(5682MiB/10039msec) 00:21:31.217 slat (usec): min=11, max=11046, avg=437.50, stdev=1069.93 00:21:31.217 clat (usec): min=8045, max=69156, avg=27802.47, stdev=5934.08 00:21:31.218 lat (usec): min=8284, max=69177, avg=28239.97, stdev=6061.25 00:21:31.218 clat percentiles (usec): 00:21:31.218 | 1.00th=[13435], 5.00th=[14877], 10.00th=[15401], 20.00th=[28181], 00:21:31.218 | 30.00th=[28705], 40.00th=[29230], 50.00th=[29754], 60.00th=[30278], 00:21:31.218 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31589], 95.00th=[32637], 00:21:31.218 | 99.00th=[36439], 99.50th=[38536], 99.90th=[51643], 99.95th=[63177], 00:21:31.218 | 99.99th=[68682] 00:21:31.218 bw ( KiB/s): min=512512, max=1037285, per=14.06%, avg=580026.20, stdev=144384.97, samples=20 00:21:31.218 iops : min= 2002, max= 4051, avg=2265.65, stdev=563.79, samples=20 00:21:31.218 lat (msec) : 10=0.14%, 20=16.03%, 50=83.71%, 100=0.12% 00:21:31.218 cpu : usr=0.66%, sys=6.65%, ctx=4209, majf=0, minf=4097 00:21:31.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:31.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.218 issued rwts: total=22726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.218 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.218 job2: (groupid=0, jobs=1): err= 0: pid=2750084: Sat Nov 9 17:30:49 2024 00:21:31.218 read: IOPS=954, BW=239MiB/s (250MB/s)(2400MiB/10054msec) 00:21:31.218 slat (usec): min=11, max=37417, avg=996.12, stdev=3451.64 00:21:31.218 clat (msec): min=13, max=116, avg=65.97, stdev=14.44 00:21:31.218 lat (msec): min=13, max=144, avg=66.96, stdev=14.98 00:21:31.218 clat percentiles (msec): 00:21:31.218 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 49], 00:21:31.218 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 77], 00:21:31.218 | 70.00th=[ 80], 80.00th=[ 81], 90.00th=[ 82], 95.00th=[ 84], 00:21:31.218 | 99.00th=[ 92], 99.50th=[ 102], 99.90th=[ 115], 99.95th=[ 116], 00:21:31.218 | 99.99th=[ 116] 00:21:31.218 bw ( KiB/s): min=190083, max=342528, per=5.92%, avg=244051.75, stdev=49408.26, samples=20 00:21:31.218 iops : min= 742, max= 1338, avg=953.25, stdev=193.07, samples=20 00:21:31.218 lat (msec) : 20=0.30%, 50=24.40%, 100=74.70%, 250=0.59% 00:21:31.218 cpu : usr=0.36%, sys=4.08%, ctx=2049, majf=0, minf=4097 00:21:31.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:31.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.218 issued rwts: total=9598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.218 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.218 job3: (groupid=0, jobs=1): err= 0: pid=2750098: Sat Nov 9 17:30:49 2024 00:21:31.218 read: IOPS=2396, BW=599MiB/s (628MB/s)(6013MiB/10037msec) 00:21:31.218 slat (usec): min=11, max=17415, avg=412.01, stdev=1027.62 00:21:31.218 clat (usec): min=9012, max=65978, avg=26262.96, stdev=7063.77 00:21:31.218 lat (usec): min=9329, max=66017, avg=26674.97, stdev=7195.43 00:21:31.218 clat percentiles (usec): 00:21:31.218 | 1.00th=[13435], 5.00th=[14484], 10.00th=[15008], 20.00th=[15664], 00:21:31.218 | 30.00th=[28181], 40.00th=[28967], 50.00th=[29492], 60.00th=[30016], 00:21:31.218 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31589], 95.00th=[32637], 00:21:31.218 | 99.00th=[36439], 99.50th=[38011], 99.90th=[60556], 99.95th=[64226], 00:21:31.218 | 99.99th=[65799] 00:21:31.218 bw ( KiB/s): min=517130, max=1067094, per=14.89%, avg=614110.60, stdev=187449.64, samples=20 00:21:31.218 iops : min= 2020, max= 4168, avg=2398.75, stdev=732.23, samples=20 00:21:31.218 lat (msec) : 10=0.02%, 20=27.11%, 50=72.63%, 100=0.25% 00:21:31.218 cpu : usr=0.63%, sys=7.29%, ctx=4388, majf=0, minf=4097 00:21:31.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:31.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.218 issued rwts: total=24050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.218 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.218 job4: (groupid=0, jobs=1): err= 0: pid=2750106: Sat Nov 9 17:30:49 2024 00:21:31.218 read: IOPS=901, BW=225MiB/s (236MB/s)(2266MiB/10050msec) 00:21:31.218 slat (usec): min=12, max=31495, avg=1090.85, stdev=2712.92 00:21:31.218 clat (msec): min=12, max=125, avg=69.79, stdev=11.18 00:21:31.218 lat (msec): min=12, max=125, avg=70.89, stdev=11.59 00:21:31.218 clat percentiles (msec): 00:21:31.218 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 62], 20.00th=[ 63], 00:21:31.218 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 78], 00:21:31.218 | 70.00th=[ 80], 80.00th=[ 81], 90.00th=[ 82], 95.00th=[ 84], 00:21:31.218 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 114], 99.95th=[ 123], 00:21:31.218 | 99.99th=[ 126] 00:21:31.218 bw ( KiB/s): min=198144, max=332135, per=5.58%, avg=230382.75, stdev=34815.45, samples=20 00:21:31.218 iops : min= 774, max= 1297, avg=899.80, stdev=136.03, samples=20 00:21:31.218 lat (msec) : 20=0.38%, 50=5.94%, 100=93.37%, 250=0.32% 00:21:31.218 cpu : usr=0.43%, sys=4.47%, ctx=1790, majf=0, minf=4097 00:21:31.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:31.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.218 issued rwts: total=9064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.218 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.218 job5: (groupid=0, jobs=1): err= 0: pid=2750137: Sat Nov 9 17:30:49 2024 00:21:31.218 read: IOPS=2801, BW=700MiB/s (734MB/s)(7031MiB/10039msec) 00:21:31.218 slat (usec): min=10, max=9563, avg=352.07, stdev=846.40 00:21:31.218 clat (usec): min=9050, max=69272, avg=22467.72, stdev=7828.26 00:21:31.218 lat (usec): min=9266, max=69298, avg=22819.79, stdev=7957.99 00:21:31.218 clat percentiles (usec): 00:21:31.218 | 
1.00th=[13304], 5.00th=[13829], 10.00th=[14615], 20.00th=[15139], 00:21:31.218 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16909], 60.00th=[28967], 00:21:31.218 | 70.00th=[30016], 80.00th=[30802], 90.00th=[31327], 95.00th=[32375], 00:21:31.218 | 99.00th=[34341], 99.50th=[35390], 99.90th=[57934], 99.95th=[61604], 00:21:31.218 | 99.99th=[67634] 00:21:31.218 bw ( KiB/s): min=520192, max=1068544, per=17.41%, avg=718331.40, stdev=253147.84, samples=20 00:21:31.218 iops : min= 2032, max= 4174, avg=2805.90, stdev=988.92, samples=20 00:21:31.218 lat (msec) : 10=0.04%, 20=52.86%, 50=46.97%, 100=0.13% 00:21:31.218 cpu : usr=0.66%, sys=7.62%, ctx=5188, majf=0, minf=4097 00:21:31.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:31.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.218 issued rwts: total=28125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.219 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.219 job6: (groupid=0, jobs=1): err= 0: pid=2750151: Sat Nov 9 17:30:49 2024 00:21:31.219 read: IOPS=882, BW=221MiB/s (231MB/s)(2218MiB/10050msec) 00:21:31.219 slat (usec): min=10, max=23265, avg=1117.26, stdev=3107.45 00:21:31.219 clat (msec): min=15, max=140, avg=71.31, stdev= 9.53 00:21:31.219 lat (msec): min=16, max=140, avg=72.43, stdev=10.05 00:21:31.219 clat percentiles (msec): 00:21:31.219 | 1.00th=[ 60], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 64], 00:21:31.219 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 79], 00:21:31.219 | 70.00th=[ 80], 80.00th=[ 81], 90.00th=[ 82], 95.00th=[ 84], 00:21:31.219 | 99.00th=[ 96], 99.50th=[ 102], 99.90th=[ 121], 99.95th=[ 124], 00:21:31.219 | 99.99th=[ 142] 00:21:31.219 bw ( KiB/s): min=195072, max=256512, per=5.47%, avg=225474.65, stdev=26093.18, samples=20 00:21:31.219 iops : min= 762, max= 1002, avg=880.60, stdev=102.00, samples=20 00:21:31.219 lat (msec) : 20=0.15%, 50=0.45%, 100=98.84%, 250=0.56% 00:21:31.219 cpu : usr=0.33%, sys=3.83%, ctx=1654, majf=0, minf=4097 00:21:31.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:31.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.219 issued rwts: total=8870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.219 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.219 job7: (groupid=0, jobs=1): err= 0: pid=2750162: Sat Nov 9 17:30:49 2024 00:21:31.219 read: IOPS=1304, BW=326MiB/s (342MB/s)(3278MiB/10047msec) 00:21:31.219 slat (usec): min=12, max=21467, avg=759.43, stdev=2041.14 00:21:31.219 clat (usec): min=13188, max=95950, avg=48229.83, stdev=11113.78 00:21:31.219 lat (usec): min=13457, max=95978, avg=48989.26, stdev=11414.46 00:21:31.219 clat percentiles (usec): 00:21:31.219 | 1.00th=[29492], 5.00th=[30540], 10.00th=[31589], 20.00th=[34341], 00:21:31.219 | 30.00th=[46400], 40.00th=[46924], 50.00th=[47973], 60.00th=[48497], 00:21:31.219 | 70.00th=[50070], 80.00th=[61604], 90.00th=[63701], 95.00th=[65274], 00:21:31.219 | 99.00th=[69731], 99.50th=[72877], 99.90th=[79168], 99.95th=[80217], 00:21:31.219 | 99.99th=[95945] 00:21:31.219 bw ( KiB/s): min=248320, max=505856, per=8.10%, avg=334020.15, stdev=80144.42, samples=20 00:21:31.219 iops : min= 970, max= 1976, avg=1304.65, stdev=313.12, samples=20 00:21:31.219 lat (msec) : 20=0.18%, 50=69.46%, 100=30.36% 00:21:31.219 
cpu : usr=0.49%, sys=5.65%, ctx=2483, majf=0, minf=4097 00:21:31.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:21:31.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.219 issued rwts: total=13111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.219 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.219 job8: (groupid=0, jobs=1): err= 0: pid=2750198: Sat Nov 9 17:30:49 2024 00:21:31.219 read: IOPS=1516, BW=379MiB/s (398MB/s)(3810MiB/10047msec) 00:21:31.219 slat (usec): min=11, max=25801, avg=646.79, stdev=1835.05 00:21:31.219 clat (usec): min=8927, max=89404, avg=41503.09, stdev=13074.00 00:21:31.219 lat (usec): min=9145, max=89462, avg=42149.88, stdev=13354.15 00:21:31.219 clat percentiles (usec): 00:21:31.219 | 1.00th=[28181], 5.00th=[28967], 10.00th=[29230], 20.00th=[30278], 00:21:31.219 | 30.00th=[30802], 40.00th=[31327], 50.00th=[33162], 60.00th=[46924], 00:21:31.219 | 70.00th=[47973], 80.00th=[50594], 90.00th=[63177], 95.00th=[64750], 00:21:31.219 | 99.00th=[68682], 99.50th=[72877], 99.90th=[86508], 99.95th=[88605], 00:21:31.219 | 99.99th=[89654] 00:21:31.219 bw ( KiB/s): min=251392, max=531456, per=9.42%, avg=388446.10, stdev=114630.60, samples=20 00:21:31.219 iops : min= 982, max= 2076, avg=1517.30, stdev=447.81, samples=20 00:21:31.219 lat (msec) : 10=0.07%, 20=0.51%, 50=78.21%, 100=21.22% 00:21:31.219 cpu : usr=0.61%, sys=5.44%, ctx=2974, majf=0, minf=4097 00:21:31.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:31.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.219 issued rwts: total=15238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.219 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.219 job9: (groupid=0, jobs=1): err= 0: pid=2750211: Sat Nov 9 17:30:49 2024 00:21:31.219 read: IOPS=902, BW=226MiB/s (237MB/s)(2269MiB/10055msec) 00:21:31.219 slat (usec): min=14, max=32437, avg=1098.57, stdev=3183.24 00:21:31.219 clat (msec): min=10, max=116, avg=69.74, stdev=11.49 00:21:31.219 lat (msec): min=10, max=117, avg=70.84, stdev=12.01 00:21:31.219 clat percentiles (msec): 00:21:31.219 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 62], 20.00th=[ 63], 00:21:31.219 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 79], 00:21:31.219 | 70.00th=[ 80], 80.00th=[ 81], 90.00th=[ 82], 95.00th=[ 84], 00:21:31.219 | 99.00th=[ 91], 99.50th=[ 101], 99.90th=[ 113], 99.95th=[ 115], 00:21:31.219 | 99.99th=[ 117] 00:21:31.219 bw ( KiB/s): min=194048, max=335360, per=5.59%, avg=230686.25, stdev=35410.57, samples=20 00:21:31.219 iops : min= 758, max= 1310, avg=901.05, stdev=138.39, samples=20 00:21:31.219 lat (msec) : 20=0.44%, 50=6.31%, 100=92.72%, 250=0.53% 00:21:31.219 cpu : usr=0.41%, sys=4.47%, ctx=1744, majf=0, minf=3659 00:21:31.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:31.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.219 issued rwts: total=9075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.219 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.219 job10: (groupid=0, jobs=1): err= 0: pid=2750223: Sat Nov 9 17:30:49 2024 00:21:31.219 read: IOPS=900, BW=225MiB/s 
(236MB/s)(2263MiB/10053msec) 00:21:31.219 slat (usec): min=12, max=29284, avg=1100.14, stdev=3150.31 00:21:31.219 clat (msec): min=11, max=130, avg=69.90, stdev=11.39 00:21:31.219 lat (msec): min=12, max=130, avg=71.00, stdev=11.89 00:21:31.219 clat percentiles (msec): 00:21:31.219 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 62], 20.00th=[ 63], 00:21:31.219 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 78], 00:21:31.219 | 70.00th=[ 80], 80.00th=[ 81], 90.00th=[ 82], 95.00th=[ 84], 00:21:31.219 | 99.00th=[ 94], 99.50th=[ 105], 99.90th=[ 127], 99.95th=[ 129], 00:21:31.219 | 99.99th=[ 131] 00:21:31.219 bw ( KiB/s): min=191488, max=333979, per=5.58%, avg=230130.90, stdev=35411.01, samples=20 00:21:31.219 iops : min= 748, max= 1304, avg=898.85, stdev=138.29, samples=20 00:21:31.219 lat (msec) : 20=0.36%, 50=6.08%, 100=92.98%, 250=0.57% 00:21:31.220 cpu : usr=0.31%, sys=3.88%, ctx=1719, majf=0, minf=4097 00:21:31.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:31.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.220 issued rwts: total=9052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.220 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.220 00:21:31.220 Run status group 0 (all jobs): 00:21:31.220 READ: bw=4029MiB/s (4225MB/s), 221MiB/s-700MiB/s (231MB/s-734MB/s), io=39.6GiB (42.5GB), run=10037-10055msec 00:21:31.220 00:21:31.220 Disk stats (read/write): 00:21:31.220 nvme0n1: ios=26193/0, merge=0/0, ticks=1238378/0, in_queue=1238378, util=95.62% 00:21:31.220 nvme10n1: ios=45352/0, merge=0/0, ticks=1233761/0, in_queue=1233761, util=95.96% 00:21:31.220 nvme1n1: ios=19070/0, merge=0/0, ticks=1233822/0, in_queue=1233822, util=96.46% 00:21:31.220 nvme2n1: ios=48014/0, merge=0/0, ticks=1232650/0, in_queue=1232650, util=96.72% 00:21:31.220 nvme3n1: ios=18021/0, merge=0/0, ticks=1236629/0, in_queue=1236629, util=96.86% 00:21:31.220 nvme4n1: ios=56156/0, merge=0/0, ticks=1231375/0, in_queue=1231375, util=97.48% 00:21:31.220 nvme5n1: ios=17645/0, merge=0/0, ticks=1235243/0, in_queue=1235243, util=97.73% 00:21:31.220 nvme6n1: ios=26105/0, merge=0/0, ticks=1235845/0, in_queue=1235845, util=97.94% 00:21:31.220 nvme7n1: ios=30391/0, merge=0/0, ticks=1234843/0, in_queue=1234843, util=98.68% 00:21:31.220 nvme8n1: ios=18035/0, merge=0/0, ticks=1236424/0, in_queue=1236424, util=99.04% 00:21:31.220 nvme9n1: ios=18031/0, merge=0/0, ticks=1237657/0, in_queue=1237657, util=99.25% 00:21:31.220 17:30:49 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:31.220 [global] 00:21:31.220 thread=1 00:21:31.220 invalidate=1 00:21:31.220 rw=randwrite 00:21:31.220 time_based=1 00:21:31.220 runtime=10 00:21:31.220 ioengine=libaio 00:21:31.220 direct=1 00:21:31.220 bs=262144 00:21:31.220 iodepth=64 00:21:31.220 norandommap=1 00:21:31.220 numjobs=1 00:21:31.220 00:21:31.220 [job0] 00:21:31.220 filename=/dev/nvme0n1 00:21:31.220 [job1] 00:21:31.220 filename=/dev/nvme10n1 00:21:31.220 [job2] 00:21:31.220 filename=/dev/nvme1n1 00:21:31.220 [job3] 00:21:31.220 filename=/dev/nvme2n1 00:21:31.220 [job4] 00:21:31.220 filename=/dev/nvme3n1 00:21:31.220 [job5] 00:21:31.220 filename=/dev/nvme4n1 00:21:31.220 [job6] 00:21:31.220 filename=/dev/nvme5n1 00:21:31.220 [job7] 00:21:31.220 filename=/dev/nvme6n1 00:21:31.220 [job8] 00:21:31.220 filename=/dev/nvme7n1 
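With the read pass complete (about 39.6 GiB read across the eleven namespaces, per the run status line above), the wrapper immediately launches a second pass with -t randwrite. The job file it echoes here and on the following lines reuses the same global options, only with rw=randwrite; for a single connected namespace this reduces to roughly the following standalone invocation (a sketch of equivalent fio options, not the wrapper's literal command line; device names follow the log):

  # One 10-second time-based random-write job at 256 KiB blocks, QD 64, libaio, O_DIRECT.
  fio --name=job0 --filename=/dev/nvme0n1 --rw=randwrite --bs=262144 \
      --iodepth=64 --ioengine=libaio --direct=1 --thread --invalidate=1 \
      --norandommap --numjobs=1 --time_based --runtime=10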
00:21:31.220 [job9] 00:21:31.220 filename=/dev/nvme8n1 00:21:31.220 [job10] 00:21:31.220 filename=/dev/nvme9n1 00:21:31.220 Could not set queue depth (nvme0n1) 00:21:31.220 Could not set queue depth (nvme10n1) 00:21:31.220 Could not set queue depth (nvme1n1) 00:21:31.220 Could not set queue depth (nvme2n1) 00:21:31.220 Could not set queue depth (nvme3n1) 00:21:31.220 Could not set queue depth (nvme4n1) 00:21:31.220 Could not set queue depth (nvme5n1) 00:21:31.220 Could not set queue depth (nvme6n1) 00:21:31.220 Could not set queue depth (nvme7n1) 00:21:31.220 Could not set queue depth (nvme8n1) 00:21:31.220 Could not set queue depth (nvme9n1) 00:21:31.220 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.220 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.220 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.220 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.220 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.220 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.220 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.220 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.220 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.220 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.220 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.220 fio-3.35 00:21:31.220 Starting 11 threads 00:21:41.223 00:21:41.223 job0: (groupid=0, jobs=1): err= 0: pid=2752048: Sat Nov 9 17:31:00 2024 00:21:41.223 write: IOPS=2983, BW=746MiB/s (782MB/s)(7482MiB/10031msec); 0 zone resets 00:21:41.223 slat (usec): min=16, max=35600, avg=326.76, stdev=717.97 00:21:41.223 clat (msec): min=2, max=122, avg=21.12, stdev=11.22 00:21:41.223 lat (msec): min=2, max=122, avg=21.45, stdev=11.37 00:21:41.223 clat percentiles (msec): 00:21:41.223 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 17], 00:21:41.223 | 30.00th=[ 18], 40.00th=[ 18], 50.00th=[ 18], 60.00th=[ 18], 00:21:41.223 | 70.00th=[ 19], 80.00th=[ 19], 90.00th=[ 36], 95.00th=[ 37], 00:21:41.223 | 99.00th=[ 89], 99.50th=[ 92], 99.90th=[ 96], 99.95th=[ 105], 00:21:41.223 | 99.99th=[ 123] 00:21:41.223 bw ( KiB/s): min=225792, max=1021440, per=21.69%, avg=764311.05, stdev=244528.85, samples=20 00:21:41.223 iops : min= 882, max= 3990, avg=2985.50, stdev=955.23, samples=20 00:21:41.223 lat (msec) : 4=0.02%, 10=0.15%, 20=83.55%, 50=14.42%, 100=1.80% 00:21:41.223 lat (msec) : 250=0.06% 00:21:41.223 cpu : usr=4.40%, sys=6.30%, ctx=6259, majf=0, minf=79 00:21:41.223 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:41.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.223 issued rwts: total=0,29926,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:21:41.223 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.223 job1: (groupid=0, jobs=1): err= 0: pid=2752078: Sat Nov 9 17:31:00 2024 00:21:41.223 write: IOPS=834, BW=209MiB/s (219MB/s)(2097MiB/10055msec); 0 zone resets 00:21:41.223 slat (usec): min=32, max=26980, avg=1186.89, stdev=2091.58 00:21:41.223 clat (msec): min=31, max=127, avg=75.51, stdev= 7.08 00:21:41.223 lat (msec): min=31, max=127, avg=76.70, stdev= 7.02 00:21:41.223 clat percentiles (msec): 00:21:41.223 | 1.00th=[ 64], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 71], 00:21:41.223 | 30.00th=[ 73], 40.00th=[ 73], 50.00th=[ 74], 60.00th=[ 75], 00:21:41.223 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 90], 00:21:41.223 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 118], 99.95th=[ 124], 00:21:41.223 | 99.99th=[ 128] 00:21:41.223 bw ( KiB/s): min=188928, max=225792, per=6.04%, avg=212964.95, stdev=12895.25, samples=20 00:21:41.223 iops : min= 738, max= 882, avg=831.80, stdev=50.38, samples=20 00:21:41.223 lat (msec) : 50=0.29%, 100=99.44%, 250=0.27% 00:21:41.223 cpu : usr=2.18%, sys=3.69%, ctx=2086, majf=0, minf=203 00:21:41.223 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:41.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.223 issued rwts: total=0,8387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.223 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.223 job2: (groupid=0, jobs=1): err= 0: pid=2752092: Sat Nov 9 17:31:00 2024 00:21:41.223 write: IOPS=897, BW=224MiB/s (235MB/s)(2254MiB/10043msec); 0 zone resets 00:21:41.223 slat (usec): min=28, max=15722, avg=1103.53, stdev=2175.17 00:21:41.223 clat (msec): min=4, max=101, avg=70.16, stdev=13.96 00:21:41.223 lat (msec): min=4, max=101, avg=71.26, stdev=14.22 00:21:41.223 clat percentiles (msec): 00:21:41.223 | 1.00th=[ 51], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 55], 00:21:41.223 | 30.00th=[ 56], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 77], 00:21:41.223 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 88], 95.00th=[ 90], 00:21:41.223 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 100], 99.95th=[ 101], 00:21:41.223 | 99.99th=[ 103] 00:21:41.223 bw ( KiB/s): min=182784, max=298922, per=6.50%, avg=229088.50, stdev=44948.97, samples=20 00:21:41.223 iops : min= 714, max= 1167, avg=894.65, stdev=175.44, samples=20 00:21:41.223 lat (msec) : 10=0.02%, 20=0.10%, 50=0.54%, 100=99.28%, 250=0.06% 00:21:41.223 cpu : usr=2.22%, sys=4.00%, ctx=2209, majf=0, minf=142 00:21:41.223 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:41.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.223 issued rwts: total=0,9017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.224 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.224 job3: (groupid=0, jobs=1): err= 0: pid=2752095: Sat Nov 9 17:31:00 2024 00:21:41.224 write: IOPS=1165, BW=291MiB/s (306MB/s)(2927MiB/10043msec); 0 zone resets 00:21:41.224 slat (usec): min=21, max=18501, avg=839.11, stdev=1838.32 00:21:41.224 clat (msec): min=4, max=100, avg=54.04, stdev=25.78 00:21:41.224 lat (msec): min=4, max=101, avg=54.87, stdev=26.19 00:21:41.224 clat percentiles (msec): 00:21:41.224 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 19], 20.00th=[ 20], 00:21:41.224 | 30.00th=[ 37], 40.00th=[ 53], 50.00th=[ 55], 
60.00th=[ 57], 00:21:41.224 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 90], 00:21:41.224 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 99], 99.95th=[ 100], 00:21:41.224 | 99.99th=[ 101] 00:21:41.224 bw ( KiB/s): min=182784, max=866304, per=8.46%, avg=297982.60, stdev=188031.42, samples=20 00:21:41.224 iops : min= 714, max= 3384, avg=1163.80, stdev=734.54, samples=20 00:21:41.224 lat (msec) : 10=0.15%, 20=23.86%, 50=10.40%, 100=65.55%, 250=0.03% 00:21:41.224 cpu : usr=2.63%, sys=4.06%, ctx=2763, majf=0, minf=200 00:21:41.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:41.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.224 issued rwts: total=0,11709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.224 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.224 job4: (groupid=0, jobs=1): err= 0: pid=2752096: Sat Nov 9 17:31:00 2024 00:21:41.224 write: IOPS=1195, BW=299MiB/s (313MB/s)(2997MiB/10028msec); 0 zone resets 00:21:41.224 slat (usec): min=20, max=16177, avg=821.43, stdev=1856.00 00:21:41.224 clat (msec): min=6, max=103, avg=52.70, stdev=26.87 00:21:41.224 lat (msec): min=8, max=104, avg=53.53, stdev=27.31 00:21:41.224 clat percentiles (msec): 00:21:41.224 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 20], 00:21:41.224 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 39], 60.00th=[ 70], 00:21:41.224 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 90], 00:21:41.224 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 100], 99.95th=[ 101], 00:21:41.224 | 99.99th=[ 104] 00:21:41.224 bw ( KiB/s): min=180224, max=820736, per=8.66%, avg=304984.45, stdev=187914.90, samples=20 00:21:41.224 iops : min= 704, max= 3206, avg=1191.20, stdev=733.99, samples=20 00:21:41.224 lat (msec) : 10=0.08%, 20=21.12%, 50=30.38%, 100=48.39%, 250=0.03% 00:21:41.224 cpu : usr=2.45%, sys=3.95%, ctx=2883, majf=0, minf=12 00:21:41.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:41.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.224 issued rwts: total=0,11986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.224 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.224 job5: (groupid=0, jobs=1): err= 0: pid=2752097: Sat Nov 9 17:31:00 2024 00:21:41.224 write: IOPS=835, BW=209MiB/s (219MB/s)(2099MiB/10055msec); 0 zone resets 00:21:41.224 slat (usec): min=29, max=12684, avg=1185.35, stdev=2081.28 00:21:41.224 clat (msec): min=16, max=129, avg=75.44, stdev= 7.66 00:21:41.224 lat (msec): min=16, max=129, avg=76.62, stdev= 7.59 00:21:41.224 clat percentiles (msec): 00:21:41.224 | 1.00th=[ 63], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 71], 00:21:41.224 | 30.00th=[ 73], 40.00th=[ 73], 50.00th=[ 74], 60.00th=[ 74], 00:21:41.224 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 90], 00:21:41.224 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 122], 99.95th=[ 128], 00:21:41.224 | 99.99th=[ 130] 00:21:41.224 bw ( KiB/s): min=188416, max=224768, per=6.05%, avg=213215.50, stdev=12514.47, samples=20 00:21:41.224 iops : min= 736, max= 878, avg=832.75, stdev=48.92, samples=20 00:21:41.224 lat (msec) : 20=0.10%, 50=0.38%, 100=99.20%, 250=0.32% 00:21:41.224 cpu : usr=2.16%, sys=3.82%, ctx=2085, majf=0, minf=11 00:21:41.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 
00:21:41.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.224 issued rwts: total=0,8396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.224 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.224 job6: (groupid=0, jobs=1): err= 0: pid=2752098: Sat Nov 9 17:31:00 2024 00:21:41.224 write: IOPS=2099, BW=525MiB/s (550MB/s)(5268MiB/10034msec); 0 zone resets 00:21:41.224 slat (usec): min=15, max=15013, avg=461.26, stdev=1137.39 00:21:41.224 clat (usec): min=814, max=96969, avg=30010.34, stdev=17921.46 00:21:41.224 lat (usec): min=961, max=97041, avg=30471.61, stdev=18196.42 00:21:41.224 clat percentiles (usec): 00:21:41.224 | 1.00th=[10421], 5.00th=[16712], 10.00th=[16909], 20.00th=[17433], 00:21:41.224 | 30.00th=[17695], 40.00th=[18220], 50.00th=[18744], 60.00th=[34341], 00:21:41.224 | 70.00th=[35390], 80.00th=[36439], 90.00th=[44827], 95.00th=[79168], 00:21:41.224 | 99.00th=[86508], 99.50th=[88605], 99.90th=[91751], 99.95th=[93848], 00:21:41.224 | 99.99th=[95945] 00:21:41.224 bw ( KiB/s): min=196096, max=916480, per=15.25%, avg=537417.80, stdev=266065.86, samples=20 00:21:41.224 iops : min= 766, max= 3580, avg=2099.10, stdev=1039.22, samples=20 00:21:41.224 lat (usec) : 1000=0.01% 00:21:41.224 lat (msec) : 2=0.11%, 4=0.21%, 10=0.63%, 20=51.63%, 50=37.78% 00:21:41.224 lat (msec) : 100=9.63% 00:21:41.224 cpu : usr=3.14%, sys=4.67%, ctx=5001, majf=0, minf=138 00:21:41.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:41.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.224 issued rwts: total=0,21070,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.224 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.224 job7: (groupid=0, jobs=1): err= 0: pid=2752099: Sat Nov 9 17:31:00 2024 00:21:41.224 write: IOPS=837, BW=209MiB/s (220MB/s)(2106MiB/10056msec); 0 zone resets 00:21:41.224 slat (usec): min=28, max=20480, avg=1172.12, stdev=2056.21 00:21:41.224 clat (msec): min=16, max=128, avg=75.20, stdev= 8.02 00:21:41.224 lat (msec): min=16, max=128, avg=76.37, stdev= 8.01 00:21:41.224 clat percentiles (msec): 00:21:41.224 | 1.00th=[ 46], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 71], 00:21:41.224 | 30.00th=[ 73], 40.00th=[ 73], 50.00th=[ 74], 60.00th=[ 74], 00:21:41.224 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 90], 00:21:41.224 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 120], 99.95th=[ 125], 00:21:41.224 | 99.99th=[ 129] 00:21:41.224 bw ( KiB/s): min=188928, max=240128, per=6.07%, avg=213932.60, stdev=13872.86, samples=20 00:21:41.224 iops : min= 738, max= 938, avg=835.55, stdev=54.23, samples=20 00:21:41.224 lat (msec) : 20=0.09%, 50=0.99%, 100=98.65%, 250=0.27% 00:21:41.224 cpu : usr=2.13%, sys=3.82%, ctx=2127, majf=0, minf=19 00:21:41.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:41.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.224 issued rwts: total=0,8424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.224 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.224 job8: (groupid=0, jobs=1): err= 0: pid=2752100: Sat Nov 9 17:31:00 2024 00:21:41.224 write: IOPS=897, BW=224MiB/s (235MB/s)(2254MiB/10043msec); 0 zone resets 
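The per-job bandwidth figures are simply the IOPS scaled by the 256 KiB block size (MiB/s = IOPS / 4). A quick shell check against job8's line just above:

  echo $(( 897 * 256 / 1024 ))   # IOPS x 256 KiB blocks, expressed in MiB/s
  # prints 224 -> matches the reported BW=224MiB/s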
00:21:41.224 slat (usec): min=30, max=18527, avg=1103.84, stdev=2179.15 00:21:41.224 clat (msec): min=12, max=102, avg=70.18, stdev=13.94 00:21:41.224 lat (msec): min=12, max=107, avg=71.28, stdev=14.19 00:21:41.224 clat percentiles (msec): 00:21:41.224 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 55], 00:21:41.224 | 30.00th=[ 56], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 77], 00:21:41.224 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 88], 95.00th=[ 91], 00:21:41.224 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 101], 99.95th=[ 103], 00:21:41.224 | 99.99th=[ 103] 00:21:41.224 bw ( KiB/s): min=182272, max=298411, per=6.50%, avg=228991.40, stdev=44903.75, samples=20 00:21:41.224 iops : min= 712, max= 1165, avg=894.30, stdev=175.26, samples=20 00:21:41.224 lat (msec) : 20=0.10%, 50=0.67%, 100=99.12%, 250=0.11% 00:21:41.224 cpu : usr=2.24%, sys=3.97%, ctx=2230, majf=0, minf=204 00:21:41.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:41.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.224 issued rwts: total=0,9014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.224 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.224 job9: (groupid=0, jobs=1): err= 0: pid=2752101: Sat Nov 9 17:31:00 2024 00:21:41.224 write: IOPS=871, BW=218MiB/s (228MB/s)(2191MiB/10055msec); 0 zone resets 00:21:41.225 slat (usec): min=21, max=16331, avg=1113.62, stdev=2016.61 00:21:41.225 clat (msec): min=5, max=125, avg=72.30, stdev=14.51 00:21:41.225 lat (msec): min=5, max=125, avg=73.42, stdev=14.66 00:21:41.225 clat percentiles (msec): 00:21:41.225 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 68], 20.00th=[ 70], 00:21:41.225 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 74], 60.00th=[ 75], 00:21:41.225 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 90], 00:21:41.225 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 116], 99.95th=[ 124], 00:21:41.225 | 99.99th=[ 126] 00:21:41.225 bw ( KiB/s): min=183296, max=345600, per=6.32%, avg=222565.05, stdev=35339.78, samples=20 00:21:41.225 iops : min= 716, max= 1350, avg=869.30, stdev=138.08, samples=20 00:21:41.225 lat (msec) : 10=0.17%, 20=0.75%, 50=7.43%, 100=91.34%, 250=0.31% 00:21:41.225 cpu : usr=2.38%, sys=3.37%, ctx=2220, majf=0, minf=154 00:21:41.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:41.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.225 issued rwts: total=0,8762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.225 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.225 job10: (groupid=0, jobs=1): err= 0: pid=2752102: Sat Nov 9 17:31:00 2024 00:21:41.225 write: IOPS=1166, BW=292MiB/s (306MB/s)(2927MiB/10032msec); 0 zone resets 00:21:41.225 slat (usec): min=21, max=19473, avg=844.11, stdev=1957.48 00:21:41.225 clat (msec): min=15, max=104, avg=53.99, stdev=22.85 00:21:41.225 lat (msec): min=15, max=107, avg=54.83, stdev=23.25 00:21:41.225 clat percentiles (msec): 00:21:41.225 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:21:41.225 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 58], 00:21:41.225 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 90], 00:21:41.225 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 104], 99.95th=[ 104], 00:21:41.225 | 99.99th=[ 105] 00:21:41.225 bw ( KiB/s): min=180736, max=459264, 
per=8.45%, avg=297844.15, stdev=125431.75, samples=20 00:21:41.225 iops : min= 706, max= 1794, avg=1163.30, stdev=489.90, samples=20 00:21:41.225 lat (msec) : 20=0.13%, 50=58.76%, 100=40.89%, 250=0.23% 00:21:41.225 cpu : usr=2.47%, sys=3.81%, ctx=2822, majf=0, minf=208 00:21:41.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:41.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.225 issued rwts: total=0,11706,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.225 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.225 00:21:41.225 Run status group 0 (all jobs): 00:21:41.225 WRITE: bw=3441MiB/s (3608MB/s), 209MiB/s-746MiB/s (219MB/s-782MB/s), io=33.8GiB (36.3GB), run=10028-10056msec 00:21:41.225 00:21:41.225 Disk stats (read/write): 00:21:41.225 nvme0n1: ios=49/59292, merge=0/0, ticks=12/1227998, in_queue=1228010, util=96.70% 00:21:41.225 nvme10n1: ios=0/16433, merge=0/0, ticks=0/1209191, in_queue=1209191, util=96.84% 00:21:41.225 nvme1n1: ios=0/17618, merge=0/0, ticks=0/1214474, in_queue=1214474, util=97.19% 00:21:41.225 nvme2n1: ios=0/22997, merge=0/0, ticks=0/1215690, in_queue=1215690, util=97.37% 00:21:41.225 nvme3n1: ios=0/23406, merge=0/0, ticks=0/1217736, in_queue=1217736, util=97.45% 00:21:41.225 nvme4n1: ios=0/16458, merge=0/0, ticks=0/1212937, in_queue=1212937, util=97.85% 00:21:41.225 nvme5n1: ios=0/41582, merge=0/0, ticks=0/1215697, in_queue=1215697, util=98.02% 00:21:41.225 nvme6n1: ios=0/16508, merge=0/0, ticks=0/1213509, in_queue=1213509, util=98.13% 00:21:41.225 nvme7n1: ios=0/17613, merge=0/0, ticks=0/1214086, in_queue=1214086, util=98.58% 00:21:41.225 nvme8n1: ios=0/17183, merge=0/0, ticks=0/1214915, in_queue=1214915, util=98.79% 00:21:41.225 nvme9n1: ios=0/22849, merge=0/0, ticks=0/1214730, in_queue=1214730, util=98.94% 00:21:41.225 17:31:00 -- target/multiconnection.sh@36 -- # sync 00:21:41.225 17:31:00 -- target/multiconnection.sh@37 -- # seq 1 11 00:21:41.225 17:31:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.225 17:31:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:41.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:41.485 17:31:01 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:41.485 17:31:01 -- common/autotest_common.sh@1208 -- # local i=0 00:21:41.485 17:31:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:41.485 17:31:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:21:41.485 17:31:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:41.485 17:31:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:21:41.485 17:31:01 -- common/autotest_common.sh@1220 -- # return 0 00:21:41.485 17:31:01 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:41.485 17:31:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.485 17:31:01 -- common/autotest_common.sh@10 -- # set +x 00:21:41.746 17:31:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.746 17:31:01 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.746 17:31:01 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:42.686 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:42.686 17:31:02 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK2 00:21:42.686 17:31:02 -- common/autotest_common.sh@1208 -- # local i=0 00:21:42.686 17:31:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:42.686 17:31:02 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:21:42.686 17:31:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:42.686 17:31:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:21:42.686 17:31:02 -- common/autotest_common.sh@1220 -- # return 0 00:21:42.686 17:31:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:42.686 17:31:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.686 17:31:02 -- common/autotest_common.sh@10 -- # set +x 00:21:42.686 17:31:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.686 17:31:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:42.686 17:31:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:43.625 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:43.625 17:31:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:43.625 17:31:03 -- common/autotest_common.sh@1208 -- # local i=0 00:21:43.625 17:31:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:43.625 17:31:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:21:43.625 17:31:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:43.625 17:31:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:21:43.625 17:31:03 -- common/autotest_common.sh@1220 -- # return 0 00:21:43.625 17:31:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:43.625 17:31:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.625 17:31:03 -- common/autotest_common.sh@10 -- # set +x 00:21:43.625 17:31:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.625 17:31:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:43.625 17:31:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:44.563 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:44.563 17:31:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:44.563 17:31:04 -- common/autotest_common.sh@1208 -- # local i=0 00:21:44.563 17:31:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:44.564 17:31:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:21:44.564 17:31:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:44.564 17:31:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:21:44.564 17:31:04 -- common/autotest_common.sh@1220 -- # return 0 00:21:44.564 17:31:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:44.564 17:31:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.564 17:31:04 -- common/autotest_common.sh@10 -- # set +x 00:21:44.564 17:31:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.564 17:31:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:44.564 17:31:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:45.502 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:45.502 17:31:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:45.502 17:31:05 -- common/autotest_common.sh@1208 -- # local 
i=0 00:21:45.502 17:31:05 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:45.502 17:31:05 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:21:45.502 17:31:05 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:21:45.502 17:31:05 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:45.502 17:31:05 -- common/autotest_common.sh@1220 -- # return 0 00:21:45.502 17:31:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:45.502 17:31:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.502 17:31:05 -- common/autotest_common.sh@10 -- # set +x 00:21:45.502 17:31:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.502 17:31:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:45.502 17:31:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:46.882 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:46.882 17:31:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:46.882 17:31:06 -- common/autotest_common.sh@1208 -- # local i=0 00:21:46.882 17:31:06 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:46.882 17:31:06 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:21:46.882 17:31:06 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:46.882 17:31:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:21:46.882 17:31:06 -- common/autotest_common.sh@1220 -- # return 0 00:21:46.882 17:31:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:46.882 17:31:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.882 17:31:06 -- common/autotest_common.sh@10 -- # set +x 00:21:46.882 17:31:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.882 17:31:06 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.882 17:31:06 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:47.451 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:47.451 17:31:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:47.451 17:31:07 -- common/autotest_common.sh@1208 -- # local i=0 00:21:47.451 17:31:07 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:47.451 17:31:07 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:21:47.710 17:31:07 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:21:47.710 17:31:07 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:47.710 17:31:07 -- common/autotest_common.sh@1220 -- # return 0 00:21:47.710 17:31:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:47.710 17:31:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.710 17:31:07 -- common/autotest_common.sh@10 -- # set +x 00:21:47.710 17:31:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.710 17:31:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:47.710 17:31:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:48.649 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:48.649 17:31:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:48.649 17:31:08 -- common/autotest_common.sh@1208 -- # local i=0 00:21:48.649 17:31:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:48.649 
17:31:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:21:48.649 17:31:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:48.649 17:31:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:21:48.649 17:31:08 -- common/autotest_common.sh@1220 -- # return 0 00:21:48.649 17:31:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:48.649 17:31:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.649 17:31:08 -- common/autotest_common.sh@10 -- # set +x 00:21:48.649 17:31:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.649 17:31:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.649 17:31:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:49.588 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:49.588 17:31:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:49.588 17:31:09 -- common/autotest_common.sh@1208 -- # local i=0 00:21:49.588 17:31:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:49.588 17:31:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:21:49.588 17:31:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:21:49.588 17:31:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:49.588 17:31:09 -- common/autotest_common.sh@1220 -- # return 0 00:21:49.588 17:31:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:49.588 17:31:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.588 17:31:09 -- common/autotest_common.sh@10 -- # set +x 00:21:49.588 17:31:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.588 17:31:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.588 17:31:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:50.526 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:50.526 17:31:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:50.526 17:31:10 -- common/autotest_common.sh@1208 -- # local i=0 00:21:50.526 17:31:10 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:50.526 17:31:10 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:21:50.526 17:31:10 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:21:50.526 17:31:10 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:50.526 17:31:10 -- common/autotest_common.sh@1220 -- # return 0 00:21:50.526 17:31:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:21:50.526 17:31:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.526 17:31:10 -- common/autotest_common.sh@10 -- # set +x 00:21:50.526 17:31:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.526 17:31:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:50.526 17:31:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:21:51.463 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:51.463 17:31:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:21:51.463 17:31:11 -- common/autotest_common.sh@1208 -- # local i=0 00:21:51.463 17:31:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:51.463 17:31:11 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:21:51.722 17:31:11 
-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:51.722 17:31:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:21:51.722 17:31:11 -- common/autotest_common.sh@1220 -- # return 0 00:21:51.722 17:31:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:51.722 17:31:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.722 17:31:11 -- common/autotest_common.sh@10 -- # set +x 00:21:51.722 17:31:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.722 17:31:11 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:51.722 17:31:11 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:51.722 17:31:11 -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:51.722 17:31:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:51.722 17:31:11 -- nvmf/common.sh@116 -- # sync 00:21:51.722 17:31:11 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:21:51.722 17:31:11 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:21:51.722 17:31:11 -- nvmf/common.sh@119 -- # set +e 00:21:51.722 17:31:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:51.722 17:31:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:21:51.722 rmmod nvme_rdma 00:21:51.722 rmmod nvme_fabrics 00:21:51.722 17:31:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:51.722 17:31:11 -- nvmf/common.sh@123 -- # set -e 00:21:51.722 17:31:11 -- nvmf/common.sh@124 -- # return 0 00:21:51.722 17:31:11 -- nvmf/common.sh@477 -- # '[' -n 2743301 ']' 00:21:51.722 17:31:11 -- nvmf/common.sh@478 -- # killprocess 2743301 00:21:51.722 17:31:11 -- common/autotest_common.sh@936 -- # '[' -z 2743301 ']' 00:21:51.722 17:31:11 -- common/autotest_common.sh@940 -- # kill -0 2743301 00:21:51.722 17:31:11 -- common/autotest_common.sh@941 -- # uname 00:21:51.722 17:31:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:51.722 17:31:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2743301 00:21:51.722 17:31:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:51.722 17:31:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:51.722 17:31:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2743301' 00:21:51.722 killing process with pid 2743301 00:21:51.722 17:31:11 -- common/autotest_common.sh@955 -- # kill 2743301 00:21:51.722 17:31:11 -- common/autotest_common.sh@960 -- # wait 2743301 00:21:52.291 17:31:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:52.291 17:31:11 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:21:52.291 00:21:52.291 real 1m15.097s 00:21:52.291 user 4m54.649s 00:21:52.291 sys 0m19.099s 00:21:52.291 17:31:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:52.291 17:31:11 -- common/autotest_common.sh@10 -- # set +x 00:21:52.292 ************************************ 00:21:52.292 END TEST nvmf_multiconnection 00:21:52.292 ************************************ 00:21:52.292 17:31:11 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:21:52.292 17:31:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:52.292 17:31:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:52.292 17:31:11 -- common/autotest_common.sh@10 -- # set +x 00:21:52.292 ************************************ 00:21:52.292 START TEST nvmf_initiator_timeout 00:21:52.292 ************************************ 
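The multiconnection teardown traced above walks all 11 subsystems in turn: disconnect the initiator from nqn.2016-06.io.spdk:cnodeN, wait until lsblk no longer reports a namespace with serial SPDKN, then delete the subsystem over RPC. A condensed sketch of that loop (rpc_cmd is the test helper seen throughout this log; the real waitforserial_disconnect retries with a bounded counter rather than the simple wait shown here):

  for i in $(seq 1 11); do                        # NVMF_SUBSYS=11 in this run
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
          sleep 1                                  # wait for the namespace to disappear
      done
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done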
00:21:52.292 17:31:11 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:21:52.292 * Looking for test storage... 00:21:52.292 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:52.292 17:31:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:52.292 17:31:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:52.292 17:31:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:52.552 17:31:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:52.552 17:31:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:52.552 17:31:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:52.552 17:31:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:52.552 17:31:12 -- scripts/common.sh@335 -- # IFS=.-: 00:21:52.552 17:31:12 -- scripts/common.sh@335 -- # read -ra ver1 00:21:52.552 17:31:12 -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.552 17:31:12 -- scripts/common.sh@336 -- # read -ra ver2 00:21:52.552 17:31:12 -- scripts/common.sh@337 -- # local 'op=<' 00:21:52.552 17:31:12 -- scripts/common.sh@339 -- # ver1_l=2 00:21:52.552 17:31:12 -- scripts/common.sh@340 -- # ver2_l=1 00:21:52.552 17:31:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:52.552 17:31:12 -- scripts/common.sh@343 -- # case "$op" in 00:21:52.552 17:31:12 -- scripts/common.sh@344 -- # : 1 00:21:52.552 17:31:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:52.552 17:31:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:52.552 17:31:12 -- scripts/common.sh@364 -- # decimal 1 00:21:52.552 17:31:12 -- scripts/common.sh@352 -- # local d=1 00:21:52.552 17:31:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.552 17:31:12 -- scripts/common.sh@354 -- # echo 1 00:21:52.552 17:31:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:52.552 17:31:12 -- scripts/common.sh@365 -- # decimal 2 00:21:52.552 17:31:12 -- scripts/common.sh@352 -- # local d=2 00:21:52.552 17:31:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.552 17:31:12 -- scripts/common.sh@354 -- # echo 2 00:21:52.552 17:31:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:52.552 17:31:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:52.552 17:31:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:52.552 17:31:12 -- scripts/common.sh@367 -- # return 0 00:21:52.552 17:31:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.552 17:31:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:52.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.552 --rc genhtml_branch_coverage=1 00:21:52.552 --rc genhtml_function_coverage=1 00:21:52.552 --rc genhtml_legend=1 00:21:52.552 --rc geninfo_all_blocks=1 00:21:52.552 --rc geninfo_unexecuted_blocks=1 00:21:52.552 00:21:52.552 ' 00:21:52.552 17:31:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:52.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.552 --rc genhtml_branch_coverage=1 00:21:52.552 --rc genhtml_function_coverage=1 00:21:52.552 --rc genhtml_legend=1 00:21:52.552 --rc geninfo_all_blocks=1 00:21:52.552 --rc geninfo_unexecuted_blocks=1 00:21:52.552 00:21:52.552 ' 00:21:52.552 17:31:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:52.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.552 --rc 
genhtml_branch_coverage=1 00:21:52.552 --rc genhtml_function_coverage=1 00:21:52.552 --rc genhtml_legend=1 00:21:52.552 --rc geninfo_all_blocks=1 00:21:52.552 --rc geninfo_unexecuted_blocks=1 00:21:52.552 00:21:52.552 ' 00:21:52.552 17:31:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:52.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.552 --rc genhtml_branch_coverage=1 00:21:52.552 --rc genhtml_function_coverage=1 00:21:52.552 --rc genhtml_legend=1 00:21:52.552 --rc geninfo_all_blocks=1 00:21:52.552 --rc geninfo_unexecuted_blocks=1 00:21:52.552 00:21:52.552 ' 00:21:52.552 17:31:12 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.552 17:31:12 -- nvmf/common.sh@7 -- # uname -s 00:21:52.552 17:31:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.552 17:31:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.552 17:31:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.552 17:31:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.552 17:31:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.552 17:31:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.552 17:31:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.552 17:31:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.552 17:31:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.552 17:31:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.552 17:31:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:52.552 17:31:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:52.552 17:31:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.552 17:31:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.552 17:31:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.552 17:31:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:52.552 17:31:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.552 17:31:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.552 17:31:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.552 17:31:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.552 17:31:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.553 17:31:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.553 17:31:12 -- paths/export.sh@5 -- # export PATH 00:21:52.553 17:31:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.553 17:31:12 -- nvmf/common.sh@46 -- # : 0 00:21:52.553 17:31:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:52.553 17:31:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:52.553 17:31:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:52.553 17:31:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.553 17:31:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.553 17:31:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:52.553 17:31:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:52.553 17:31:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:52.553 17:31:12 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:52.553 17:31:12 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:52.553 17:31:12 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:52.553 17:31:12 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:21:52.553 17:31:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.553 17:31:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:52.553 17:31:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:52.553 17:31:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:52.553 17:31:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.553 17:31:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.553 17:31:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.553 17:31:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:52.553 17:31:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:52.553 17:31:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:52.553 17:31:12 -- common/autotest_common.sh@10 -- # set +x 00:21:59.126 17:31:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:59.126 17:31:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:59.126 17:31:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:59.126 17:31:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:59.126 17:31:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:59.126 17:31:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:59.126 17:31:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:59.126 17:31:18 -- nvmf/common.sh@294 -- # net_devs=() 00:21:59.126 17:31:18 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:21:59.126 17:31:18 -- nvmf/common.sh@295 -- # e810=() 00:21:59.126 17:31:18 -- nvmf/common.sh@295 -- # local -ga e810 00:21:59.126 17:31:18 -- nvmf/common.sh@296 -- # x722=() 00:21:59.126 17:31:18 -- nvmf/common.sh@296 -- # local -ga x722 00:21:59.126 17:31:18 -- nvmf/common.sh@297 -- # mlx=() 00:21:59.126 17:31:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:59.126 17:31:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.126 17:31:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.126 17:31:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.126 17:31:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.126 17:31:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.126 17:31:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.126 17:31:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.126 17:31:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.126 17:31:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.126 17:31:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.126 17:31:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.126 17:31:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:59.126 17:31:18 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:59.126 17:31:18 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:59.126 17:31:18 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:21:59.126 17:31:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:59.126 17:31:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:59.126 17:31:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:59.126 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:59.126 17:31:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:59.126 17:31:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:59.126 17:31:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:59.126 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:59.126 17:31:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:59.126 17:31:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:59.126 17:31:18 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:59.126 17:31:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.126 17:31:18 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:59.126 17:31:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.126 17:31:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:59.126 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:59.126 17:31:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.126 17:31:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:59.126 17:31:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.126 17:31:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:59.126 17:31:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.126 17:31:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:59.126 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:59.126 17:31:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.126 17:31:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:59.126 17:31:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:59.126 17:31:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:59.126 17:31:18 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:59.126 17:31:18 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:59.126 17:31:18 -- nvmf/common.sh@57 -- # uname 00:21:59.126 17:31:18 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:21:59.126 17:31:18 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:59.126 17:31:18 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:59.126 17:31:18 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:59.126 17:31:18 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:59.126 17:31:18 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:59.126 17:31:18 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:59.126 17:31:18 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:59.126 17:31:18 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:59.126 17:31:18 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:59.126 17:31:18 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:59.126 17:31:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:59.126 17:31:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:59.126 17:31:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:59.127 17:31:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:59.127 17:31:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:59.127 17:31:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:59.127 17:31:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:59.127 17:31:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:59.127 17:31:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:59.127 17:31:18 -- nvmf/common.sh@104 -- # continue 2 00:21:59.127 17:31:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:59.127 17:31:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:59.127 17:31:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:59.127 17:31:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:59.127 17:31:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:59.127 17:31:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:59.127 17:31:18 -- nvmf/common.sh@104 -- # continue 2 00:21:59.127 17:31:18 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:59.127 17:31:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:21:59.127 17:31:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:59.127 17:31:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:59.127 17:31:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:59.127 17:31:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:59.127 17:31:18 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:59.127 17:31:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:59.127 17:31:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:21:59.127 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:59.127 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:59.127 altname enp217s0f0np0 00:21:59.127 altname ens818f0np0 00:21:59.127 inet 192.168.100.8/24 scope global mlx_0_0 00:21:59.127 valid_lft forever preferred_lft forever 00:21:59.127 17:31:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:59.127 17:31:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:21:59.127 17:31:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:59.127 17:31:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:59.127 17:31:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:59.127 17:31:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:59.127 17:31:18 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:59.127 17:31:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:59.127 17:31:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:21:59.127 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:59.127 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:59.127 altname enp217s0f1np1 00:21:59.127 altname ens818f1np1 00:21:59.127 inet 192.168.100.9/24 scope global mlx_0_1 00:21:59.127 valid_lft forever preferred_lft forever 00:21:59.127 17:31:18 -- nvmf/common.sh@410 -- # return 0 00:21:59.127 17:31:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:59.127 17:31:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:59.127 17:31:18 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:59.127 17:31:18 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:59.127 17:31:18 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:59.127 17:31:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:59.127 17:31:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:59.127 17:31:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:59.127 17:31:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:59.127 17:31:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:59.127 17:31:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:59.127 17:31:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:59.127 17:31:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:59.127 17:31:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:59.127 17:31:18 -- nvmf/common.sh@104 -- # continue 2 00:21:59.127 17:31:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:59.127 17:31:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:59.127 17:31:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:59.127 17:31:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:59.127 17:31:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:59.127 17:31:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 
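allocate_nic_ips above resolves each RDMA port to its IPv4 address by parsing ip -o -4 addr show: field 4 of the one-line-per-address output is addr/prefix, and the prefix length is stripped with cut. A minimal sketch of that extraction, using the interface names reported on this testbed:

  get_ip_address() {
      local interface=$1
      # e.g. "6: mlx_0_0    inet 192.168.100.8/24 ..." -> 192.168.100.8
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # 192.168.100.8 in this log
  get_ip_address mlx_0_1   # 192.168.100.9 in this log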
00:21:59.127 17:31:18 -- nvmf/common.sh@104 -- # continue 2 00:21:59.127 17:31:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:59.127 17:31:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:21:59.127 17:31:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:59.127 17:31:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:59.127 17:31:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:59.127 17:31:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:59.127 17:31:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:59.127 17:31:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:21:59.127 17:31:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:59.127 17:31:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:59.127 17:31:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:59.127 17:31:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:59.127 17:31:18 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:59.127 192.168.100.9' 00:21:59.127 17:31:18 -- nvmf/common.sh@445 -- # head -n 1 00:21:59.127 17:31:18 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:59.127 192.168.100.9' 00:21:59.127 17:31:18 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:59.127 17:31:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:59.127 192.168.100.9' 00:21:59.127 17:31:18 -- nvmf/common.sh@446 -- # head -n 1 00:21:59.127 17:31:18 -- nvmf/common.sh@446 -- # tail -n +2 00:21:59.127 17:31:18 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:59.127 17:31:18 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:59.127 17:31:18 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:59.127 17:31:18 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:59.127 17:31:18 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:59.127 17:31:18 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:59.127 17:31:18 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:21:59.127 17:31:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:59.127 17:31:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.127 17:31:18 -- common/autotest_common.sh@10 -- # set +x 00:21:59.127 17:31:18 -- nvmf/common.sh@469 -- # nvmfpid=2758889 00:21:59.127 17:31:18 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:59.127 17:31:18 -- nvmf/common.sh@470 -- # waitforlisten 2758889 00:21:59.127 17:31:18 -- common/autotest_common.sh@829 -- # '[' -z 2758889 ']' 00:21:59.127 17:31:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.127 17:31:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.127 17:31:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.127 17:31:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.127 17:31:18 -- common/autotest_common.sh@10 -- # set +x 00:21:59.127 [2024-11-09 17:31:18.854248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
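nvmfappstart above launches the SPDK target with every tracepoint group enabled (-e 0xFFFF) and a four-core mask (-m 0xF), records its PID, and blocks in waitforlisten until the target answers on its RPC socket (/var/tmp/spdk.sock here). A condensed sketch of that startup, using the binary path and helper names visible in this log:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # waitforlisten polls the RPC socket until the freshly started target responds
  waitforlisten "$nvmfpid"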
00:21:59.127 [2024-11-09 17:31:18.854294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.127 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.387 [2024-11-09 17:31:18.923662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:59.387 [2024-11-09 17:31:18.997682] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:59.387 [2024-11-09 17:31:18.997786] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.387 [2024-11-09 17:31:18.997796] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.387 [2024-11-09 17:31:18.997804] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.387 [2024-11-09 17:31:18.997849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.387 [2024-11-09 17:31:18.997947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.387 [2024-11-09 17:31:18.998029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:59.387 [2024-11-09 17:31:18.998031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.955 17:31:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.955 17:31:19 -- common/autotest_common.sh@862 -- # return 0 00:21:59.955 17:31:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:59.955 17:31:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:59.955 17:31:19 -- common/autotest_common.sh@10 -- # set +x 00:21:59.955 17:31:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.955 17:31:19 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:59.955 17:31:19 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:59.955 17:31:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.955 17:31:19 -- common/autotest_common.sh@10 -- # set +x 00:22:00.214 Malloc0 00:22:00.214 17:31:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.214 17:31:19 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:00.214 17:31:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.214 17:31:19 -- common/autotest_common.sh@10 -- # set +x 00:22:00.214 Delay0 00:22:00.214 17:31:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.214 17:31:19 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:00.214 17:31:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.214 17:31:19 -- common/autotest_common.sh@10 -- # set +x 00:22:00.214 [2024-11-09 17:31:19.778324] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x107fe40/0xeee1c0) succeed. 00:22:00.214 [2024-11-09 17:31:19.787856] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1081250/0xf6e200) succeed. 
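Editor's note: the nvmf/common.sh trace above (get_rdma_if_list / get_ip_address at @111-@112) repeatedly derives each mlx_0_* interface's first IPv4 address with ip/awk/cut. A minimal standalone sketch of that helper, using the same mlx_0_0/mlx_0_1 names seen in this run (the real implementation lives in test/nvmf/common.sh):

# Sketch of the get_ip_address logic traced above; "ip -o -4 addr show" prints
# one line per address and field 4 holds "A.B.C.D/prefix".
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run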
00:22:00.214 17:31:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.214 17:31:19 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:00.214 17:31:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.214 17:31:19 -- common/autotest_common.sh@10 -- # set +x 00:22:00.214 17:31:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.214 17:31:19 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:00.214 17:31:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.214 17:31:19 -- common/autotest_common.sh@10 -- # set +x 00:22:00.214 17:31:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.214 17:31:19 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:00.214 17:31:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.214 17:31:19 -- common/autotest_common.sh@10 -- # set +x 00:22:00.214 [2024-11-09 17:31:19.930645] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:00.214 17:31:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.214 17:31:19 -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:01.149 17:31:20 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:01.149 17:31:20 -- common/autotest_common.sh@1187 -- # local i=0 00:22:01.149 17:31:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:01.408 17:31:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:01.408 17:31:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:03.370 17:31:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:03.370 17:31:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:03.370 17:31:22 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:22:03.370 17:31:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:03.370 17:31:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:03.370 17:31:22 -- common/autotest_common.sh@1197 -- # return 0 00:22:03.370 17:31:22 -- target/initiator_timeout.sh@35 -- # fio_pid=2759494 00:22:03.370 17:31:22 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:03.370 17:31:22 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:03.370 [global] 00:22:03.370 thread=1 00:22:03.370 invalidate=1 00:22:03.370 rw=write 00:22:03.370 time_based=1 00:22:03.370 runtime=60 00:22:03.370 ioengine=libaio 00:22:03.370 direct=1 00:22:03.370 bs=4096 00:22:03.370 iodepth=1 00:22:03.370 norandommap=0 00:22:03.370 numjobs=1 00:22:03.370 00:22:03.370 verify_dump=1 00:22:03.370 verify_backlog=512 00:22:03.370 verify_state_save=0 00:22:03.370 do_verify=1 00:22:03.370 verify=crc32c-intel 00:22:03.370 [job0] 00:22:03.370 filename=/dev/nvme0n1 00:22:03.370 Could not set queue depth (nvme0n1) 00:22:03.629 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:03.629 fio-3.35 00:22:03.629 Starting 1 thread 00:22:06.921 17:31:25 -- 
target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:06.921 17:31:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.921 17:31:25 -- common/autotest_common.sh@10 -- # set +x 00:22:06.921 true 00:22:06.921 17:31:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.921 17:31:25 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:06.921 17:31:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.921 17:31:25 -- common/autotest_common.sh@10 -- # set +x 00:22:06.921 true 00:22:06.921 17:31:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.921 17:31:25 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:06.921 17:31:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.921 17:31:25 -- common/autotest_common.sh@10 -- # set +x 00:22:06.921 true 00:22:06.921 17:31:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.921 17:31:25 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:06.921 17:31:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.921 17:31:25 -- common/autotest_common.sh@10 -- # set +x 00:22:06.921 true 00:22:06.921 17:31:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.921 17:31:25 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:09.457 17:31:28 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:09.457 17:31:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.457 17:31:28 -- common/autotest_common.sh@10 -- # set +x 00:22:09.457 true 00:22:09.457 17:31:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.457 17:31:29 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:09.457 17:31:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.457 17:31:29 -- common/autotest_common.sh@10 -- # set +x 00:22:09.457 true 00:22:09.457 17:31:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.457 17:31:29 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:09.457 17:31:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.457 17:31:29 -- common/autotest_common.sh@10 -- # set +x 00:22:09.457 true 00:22:09.457 17:31:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.457 17:31:29 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:09.457 17:31:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.457 17:31:29 -- common/autotest_common.sh@10 -- # set +x 00:22:09.457 true 00:22:09.457 17:31:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.457 17:31:29 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:09.457 17:31:29 -- target/initiator_timeout.sh@54 -- # wait 2759494 00:23:05.696 00:23:05.696 job0: (groupid=0, jobs=1): err= 0: pid=2759750: Sat Nov 9 17:32:23 2024 00:23:05.696 read: IOPS=1254, BW=5018KiB/s (5138kB/s)(294MiB/60000msec) 00:23:05.696 slat (usec): min=8, max=13579, avg= 9.35, stdev=57.26 00:23:05.697 clat (usec): min=77, max=42423k, avg=668.48, stdev=154636.12 00:23:05.697 lat (usec): min=95, max=42423k, avg=677.83, stdev=154636.14 00:23:05.697 clat percentiles (usec): 00:23:05.697 | 1.00th=[ 92], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 99], 00:23:05.697 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 105], 
60.00th=[ 106], 00:23:05.697 | 70.00th=[ 109], 80.00th=[ 111], 90.00th=[ 114], 95.00th=[ 117], 00:23:05.697 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 129], 99.95th=[ 135], 00:23:05.697 | 99.99th=[ 253] 00:23:05.697 write: IOPS=1259, BW=5037KiB/s (5157kB/s)(295MiB/60000msec); 0 zone resets 00:23:05.697 slat (usec): min=10, max=330, avg=11.86, stdev= 2.25 00:23:05.697 clat (usec): min=66, max=383, avg=101.90, stdev= 7.28 00:23:05.697 lat (usec): min=94, max=542, avg=113.76, stdev= 7.73 00:23:05.697 clat percentiles (usec): 00:23:05.697 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 96], 00:23:05.697 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 103], 00:23:05.697 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 114], 00:23:05.697 | 99.00th=[ 119], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 145], 00:23:05.697 | 99.99th=[ 293] 00:23:05.697 bw ( KiB/s): min= 4096, max=19472, per=100.00%, avg=16852.11, stdev=2365.05, samples=35 00:23:05.697 iops : min= 1024, max= 4868, avg=4213.03, stdev=591.26, samples=35 00:23:05.697 lat (usec) : 100=32.31%, 250=67.68%, 500=0.01% 00:23:05.697 lat (msec) : 2=0.01%, >=2000=0.01% 00:23:05.697 cpu : usr=1.95%, sys=3.26%, ctx=150821, majf=0, minf=143 00:23:05.697 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:05.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.697 issued rwts: total=75264,75549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.697 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:05.697 00:23:05.697 Run status group 0 (all jobs): 00:23:05.697 READ: bw=5018KiB/s (5138kB/s), 5018KiB/s-5018KiB/s (5138kB/s-5138kB/s), io=294MiB (308MB), run=60000-60000msec 00:23:05.697 WRITE: bw=5037KiB/s (5157kB/s), 5037KiB/s-5037KiB/s (5157kB/s-5157kB/s), io=295MiB (309MB), run=60000-60000msec 00:23:05.697 00:23:05.697 Disk stats (read/write): 00:23:05.697 nvme0n1: ios=75049/75206, merge=0/0, ticks=7155/7092, in_queue=14247, util=99.74% 00:23:05.697 17:32:23 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:05.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:05.697 17:32:24 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:05.697 17:32:24 -- common/autotest_common.sh@1208 -- # local i=0 00:23:05.697 17:32:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:05.697 17:32:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:05.697 17:32:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:05.697 17:32:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:05.697 17:32:24 -- common/autotest_common.sh@1220 -- # return 0 00:23:05.697 17:32:24 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:05.697 17:32:24 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:05.697 nvmf hotplug test: fio successful as expected 00:23:05.697 17:32:24 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:05.697 17:32:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.697 17:32:24 -- common/autotest_common.sh@10 -- # set +x 00:23:05.697 17:32:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.697 17:32:24 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 
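Editor's note: the initiator_timeout run traced above boils down to a short sequence: export the Delay0 bdev over RDMA, start a 60-second fio write/verify job against the connected namespace, raise the Delay0 latencies from 30 to 31000000 mid-run (so outstanding I/O stalls well past the initiator timeout), then restore them and let fio finish. A condensed replay of those steps using scripts/rpc.py directly; the job-file contents, the /dev/nvme0n1 path, and the latency values are copied from the trace (the trace actually uses 310000000 for p99_write), while the rpc.py invocation and the standalone job file are assumptions of this sketch rather than the fio-wrapper the test really uses:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Job file as shown in the trace above.
cat > job0.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=60
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF

fio job0.fio &
fio_pid=$!

sleep 3
# Stall the delay bdev (uniform value used here for brevity).
for lat in avg_read avg_write p99_read p99_write; do
    $rpc bdev_delay_update_latency Delay0 $lat 31000000
done

sleep 3
# Restore the original latencies so fio can complete and verify cleanly.
for lat in avg_read avg_write p99_read p99_write; do
    $rpc bdev_delay_update_latency Delay0 $lat 30
done

wait $fio_pid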
00:23:05.697 17:32:24 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:05.697 17:32:24 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:05.697 17:32:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:05.697 17:32:24 -- nvmf/common.sh@116 -- # sync 00:23:05.697 17:32:24 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:05.697 17:32:24 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:05.697 17:32:24 -- nvmf/common.sh@119 -- # set +e 00:23:05.697 17:32:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:05.697 17:32:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:05.697 rmmod nvme_rdma 00:23:05.697 rmmod nvme_fabrics 00:23:05.697 17:32:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:05.697 17:32:24 -- nvmf/common.sh@123 -- # set -e 00:23:05.697 17:32:24 -- nvmf/common.sh@124 -- # return 0 00:23:05.697 17:32:24 -- nvmf/common.sh@477 -- # '[' -n 2758889 ']' 00:23:05.697 17:32:24 -- nvmf/common.sh@478 -- # killprocess 2758889 00:23:05.697 17:32:24 -- common/autotest_common.sh@936 -- # '[' -z 2758889 ']' 00:23:05.697 17:32:24 -- common/autotest_common.sh@940 -- # kill -0 2758889 00:23:05.697 17:32:24 -- common/autotest_common.sh@941 -- # uname 00:23:05.697 17:32:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:05.697 17:32:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2758889 00:23:05.697 17:32:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:05.697 17:32:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:05.697 17:32:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2758889' 00:23:05.697 killing process with pid 2758889 00:23:05.697 17:32:24 -- common/autotest_common.sh@955 -- # kill 2758889 00:23:05.697 17:32:24 -- common/autotest_common.sh@960 -- # wait 2758889 00:23:05.697 17:32:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:05.697 17:32:24 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:05.697 00:23:05.697 real 1m12.914s 00:23:05.697 user 4m34.411s 00:23:05.697 sys 0m7.832s 00:23:05.697 17:32:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:05.697 17:32:24 -- common/autotest_common.sh@10 -- # set +x 00:23:05.697 ************************************ 00:23:05.697 END TEST nvmf_initiator_timeout 00:23:05.697 ************************************ 00:23:05.697 17:32:24 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:05.697 17:32:24 -- nvmf/nvmf.sh@70 -- # '[' rdma = tcp ']' 00:23:05.697 17:32:24 -- nvmf/nvmf.sh@76 -- # [[ '' -eq 1 ]] 00:23:05.697 17:32:24 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:05.697 17:32:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:05.697 17:32:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:05.697 17:32:24 -- common/autotest_common.sh@10 -- # set +x 00:23:05.697 ************************************ 00:23:05.697 START TEST nvmf_shutdown 00:23:05.697 ************************************ 00:23:05.697 17:32:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:05.697 * Looking for test storage... 
00:23:05.697 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:05.697 17:32:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:05.697 17:32:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:05.697 17:32:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:05.697 17:32:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:05.697 17:32:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:05.697 17:32:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:05.697 17:32:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:05.697 17:32:25 -- scripts/common.sh@335 -- # IFS=.-: 00:23:05.697 17:32:25 -- scripts/common.sh@335 -- # read -ra ver1 00:23:05.697 17:32:25 -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.697 17:32:25 -- scripts/common.sh@336 -- # read -ra ver2 00:23:05.697 17:32:25 -- scripts/common.sh@337 -- # local 'op=<' 00:23:05.697 17:32:25 -- scripts/common.sh@339 -- # ver1_l=2 00:23:05.697 17:32:25 -- scripts/common.sh@340 -- # ver2_l=1 00:23:05.697 17:32:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:05.697 17:32:25 -- scripts/common.sh@343 -- # case "$op" in 00:23:05.697 17:32:25 -- scripts/common.sh@344 -- # : 1 00:23:05.697 17:32:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:05.697 17:32:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:05.697 17:32:25 -- scripts/common.sh@364 -- # decimal 1 00:23:05.697 17:32:25 -- scripts/common.sh@352 -- # local d=1 00:23:05.697 17:32:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.697 17:32:25 -- scripts/common.sh@354 -- # echo 1 00:23:05.697 17:32:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:05.697 17:32:25 -- scripts/common.sh@365 -- # decimal 2 00:23:05.697 17:32:25 -- scripts/common.sh@352 -- # local d=2 00:23:05.697 17:32:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.697 17:32:25 -- scripts/common.sh@354 -- # echo 2 00:23:05.697 17:32:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:05.697 17:32:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:05.697 17:32:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:05.697 17:32:25 -- scripts/common.sh@367 -- # return 0 00:23:05.697 17:32:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.697 17:32:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:05.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.697 --rc genhtml_branch_coverage=1 00:23:05.697 --rc genhtml_function_coverage=1 00:23:05.697 --rc genhtml_legend=1 00:23:05.697 --rc geninfo_all_blocks=1 00:23:05.697 --rc geninfo_unexecuted_blocks=1 00:23:05.697 00:23:05.697 ' 00:23:05.697 17:32:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:05.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.697 --rc genhtml_branch_coverage=1 00:23:05.697 --rc genhtml_function_coverage=1 00:23:05.697 --rc genhtml_legend=1 00:23:05.697 --rc geninfo_all_blocks=1 00:23:05.697 --rc geninfo_unexecuted_blocks=1 00:23:05.697 00:23:05.697 ' 00:23:05.697 17:32:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:05.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.697 --rc genhtml_branch_coverage=1 00:23:05.697 --rc genhtml_function_coverage=1 00:23:05.697 --rc genhtml_legend=1 00:23:05.697 --rc geninfo_all_blocks=1 00:23:05.697 --rc geninfo_unexecuted_blocks=1 00:23:05.697 00:23:05.697 ' 
00:23:05.697 17:32:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:05.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.697 --rc genhtml_branch_coverage=1 00:23:05.697 --rc genhtml_function_coverage=1 00:23:05.697 --rc genhtml_legend=1 00:23:05.697 --rc geninfo_all_blocks=1 00:23:05.697 --rc geninfo_unexecuted_blocks=1 00:23:05.697 00:23:05.698 ' 00:23:05.698 17:32:25 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.698 17:32:25 -- nvmf/common.sh@7 -- # uname -s 00:23:05.698 17:32:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.698 17:32:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.698 17:32:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.698 17:32:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.698 17:32:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.698 17:32:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.698 17:32:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.698 17:32:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.698 17:32:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.698 17:32:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.698 17:32:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:05.698 17:32:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:05.698 17:32:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.698 17:32:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.698 17:32:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.698 17:32:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:05.698 17:32:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.698 17:32:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.698 17:32:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.698 17:32:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.698 17:32:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.698 17:32:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.698 17:32:25 -- paths/export.sh@5 -- # export PATH 00:23:05.698 17:32:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.698 17:32:25 -- nvmf/common.sh@46 -- # : 0 00:23:05.698 17:32:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:05.698 17:32:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:05.698 17:32:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:05.698 17:32:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.698 17:32:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.698 17:32:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:05.698 17:32:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:05.698 17:32:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:05.698 17:32:25 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.698 17:32:25 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.698 17:32:25 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:05.698 17:32:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:05.698 17:32:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:05.698 17:32:25 -- common/autotest_common.sh@10 -- # set +x 00:23:05.698 ************************************ 00:23:05.698 START TEST nvmf_shutdown_tc1 00:23:05.698 ************************************ 00:23:05.698 17:32:25 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc1 00:23:05.698 17:32:25 -- target/shutdown.sh@74 -- # starttarget 00:23:05.698 17:32:25 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:05.698 17:32:25 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:05.698 17:32:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.698 17:32:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:05.698 17:32:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:05.698 17:32:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:05.698 17:32:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.698 17:32:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.698 17:32:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.698 17:32:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:05.698 17:32:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:05.698 17:32:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:05.698 17:32:25 -- common/autotest_common.sh@10 -- # set +x 00:23:12.273 17:32:31 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:23:12.273 17:32:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:12.273 17:32:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:12.273 17:32:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:12.273 17:32:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:12.273 17:32:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:12.273 17:32:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:12.273 17:32:31 -- nvmf/common.sh@294 -- # net_devs=() 00:23:12.273 17:32:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:12.273 17:32:31 -- nvmf/common.sh@295 -- # e810=() 00:23:12.273 17:32:31 -- nvmf/common.sh@295 -- # local -ga e810 00:23:12.273 17:32:31 -- nvmf/common.sh@296 -- # x722=() 00:23:12.273 17:32:31 -- nvmf/common.sh@296 -- # local -ga x722 00:23:12.273 17:32:31 -- nvmf/common.sh@297 -- # mlx=() 00:23:12.273 17:32:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:12.273 17:32:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.273 17:32:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.273 17:32:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.273 17:32:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.273 17:32:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.273 17:32:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.273 17:32:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.273 17:32:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.273 17:32:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.273 17:32:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.273 17:32:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.273 17:32:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:12.273 17:32:31 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:12.273 17:32:31 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:12.273 17:32:31 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:12.273 17:32:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:12.273 17:32:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:12.273 17:32:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:12.273 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:12.273 17:32:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:12.273 17:32:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:12.273 17:32:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:12.273 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:12.273 17:32:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:12.273 17:32:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:12.273 17:32:31 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:12.273 17:32:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.273 17:32:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:12.273 17:32:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.273 17:32:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:12.273 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:12.273 17:32:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.273 17:32:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:12.273 17:32:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.273 17:32:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:12.273 17:32:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.273 17:32:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:12.273 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:12.273 17:32:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.273 17:32:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:12.273 17:32:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:12.273 17:32:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:12.273 17:32:31 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:12.273 17:32:31 -- nvmf/common.sh@57 -- # uname 00:23:12.273 17:32:31 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:12.273 17:32:31 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:12.273 17:32:31 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:12.273 17:32:31 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:12.273 17:32:31 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:12.273 17:32:31 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:12.273 17:32:31 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:12.273 17:32:31 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:12.273 17:32:31 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:12.273 17:32:31 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:12.273 17:32:31 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:12.273 17:32:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:12.273 17:32:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:12.273 17:32:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:12.273 17:32:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:12.273 17:32:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:12.273 17:32:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:12.273 17:32:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.273 17:32:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:12.273 17:32:31 -- nvmf/common.sh@104 -- # continue 2 
00:23:12.273 17:32:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:12.273 17:32:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.273 17:32:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.273 17:32:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:12.273 17:32:31 -- nvmf/common.sh@104 -- # continue 2 00:23:12.273 17:32:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:12.273 17:32:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:12.273 17:32:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:12.273 17:32:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:12.273 17:32:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:12.273 17:32:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:12.273 17:32:31 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:12.273 17:32:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:12.273 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:12.273 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:12.273 altname enp217s0f0np0 00:23:12.273 altname ens818f0np0 00:23:12.273 inet 192.168.100.8/24 scope global mlx_0_0 00:23:12.273 valid_lft forever preferred_lft forever 00:23:12.273 17:32:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:12.273 17:32:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:12.273 17:32:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:12.273 17:32:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:12.273 17:32:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:12.273 17:32:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:12.273 17:32:31 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:12.273 17:32:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:12.273 17:32:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:12.273 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:12.273 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:12.273 altname enp217s0f1np1 00:23:12.273 altname ens818f1np1 00:23:12.273 inet 192.168.100.9/24 scope global mlx_0_1 00:23:12.274 valid_lft forever preferred_lft forever 00:23:12.274 17:32:31 -- nvmf/common.sh@410 -- # return 0 00:23:12.274 17:32:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:12.274 17:32:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:12.274 17:32:31 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:12.274 17:32:31 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:12.274 17:32:31 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:12.274 17:32:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:12.274 17:32:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:12.274 17:32:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:12.274 17:32:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:12.274 17:32:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:12.274 17:32:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:12.274 17:32:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.274 17:32:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:12.274 17:32:31 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:23:12.274 17:32:31 -- nvmf/common.sh@104 -- # continue 2 00:23:12.274 17:32:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:12.274 17:32:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.274 17:32:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:12.274 17:32:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.274 17:32:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:12.274 17:32:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:12.274 17:32:31 -- nvmf/common.sh@104 -- # continue 2 00:23:12.274 17:32:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:12.274 17:32:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:12.274 17:32:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:12.274 17:32:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:12.274 17:32:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:12.274 17:32:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:12.274 17:32:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:12.274 17:32:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:12.274 17:32:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:12.274 17:32:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:12.274 17:32:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:12.274 17:32:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:12.274 17:32:31 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:12.274 192.168.100.9' 00:23:12.274 17:32:31 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:12.274 192.168.100.9' 00:23:12.274 17:32:31 -- nvmf/common.sh@445 -- # head -n 1 00:23:12.274 17:32:31 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:12.274 17:32:31 -- nvmf/common.sh@446 -- # tail -n +2 00:23:12.274 17:32:31 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:12.274 192.168.100.9' 00:23:12.274 17:32:31 -- nvmf/common.sh@446 -- # head -n 1 00:23:12.274 17:32:31 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:12.274 17:32:31 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:12.274 17:32:31 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:12.274 17:32:31 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:12.274 17:32:31 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:12.274 17:32:31 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:12.274 17:32:31 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:12.274 17:32:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:12.274 17:32:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:12.274 17:32:31 -- common/autotest_common.sh@10 -- # set +x 00:23:12.274 17:32:31 -- nvmf/common.sh@469 -- # nvmfpid=2773408 00:23:12.274 17:32:31 -- nvmf/common.sh@470 -- # waitforlisten 2773408 00:23:12.274 17:32:31 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:12.274 17:32:31 -- common/autotest_common.sh@829 -- # '[' -z 2773408 ']' 00:23:12.274 17:32:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.274 17:32:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.274 17:32:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:12.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.274 17:32:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.274 17:32:31 -- common/autotest_common.sh@10 -- # set +x 00:23:12.274 [2024-11-09 17:32:31.917542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:12.274 [2024-11-09 17:32:31.917589] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.274 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.274 [2024-11-09 17:32:31.986128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.533 [2024-11-09 17:32:32.057181] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:12.533 [2024-11-09 17:32:32.057291] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.533 [2024-11-09 17:32:32.057301] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.533 [2024-11-09 17:32:32.057309] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.533 [2024-11-09 17:32:32.057418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.533 [2024-11-09 17:32:32.057501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.533 [2024-11-09 17:32:32.057613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.533 [2024-11-09 17:32:32.057614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:13.102 17:32:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.102 17:32:32 -- common/autotest_common.sh@862 -- # return 0 00:23:13.102 17:32:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:13.102 17:32:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:13.102 17:32:32 -- common/autotest_common.sh@10 -- # set +x 00:23:13.102 17:32:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.102 17:32:32 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:13.102 17:32:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.102 17:32:32 -- common/autotest_common.sh@10 -- # set +x 00:23:13.102 [2024-11-09 17:32:32.797188] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ca6380/0x1caa870) succeed. 00:23:13.102 [2024-11-09 17:32:32.806418] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ca7970/0x1cebf10) succeed. 
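Editor's note: with the nvmf_shutdown_tc1 target listening (RDMA transport and both mlx5 IB devices created above), the next chunk batches per-subsystem setup through rpcs.txt: ten malloc bdevs (Malloc1..Malloc10), one subsystem per bdev (nqn.2016-06.io.spdk:cnode1..cnode10, per the bdevperf config printed further down), and an RDMA listener on 192.168.100.8:4420. A sketch of what one pass of that loop amounts to, assuming the 64/512 malloc geometry set by MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE in shutdown.sh; the exact rpcs.txt contents (including the serial numbers) are not shown in the trace, so treat the details as illustrative:

# Illustrative per-subsystem setup, one iteration per i in 1..10 (sketch only;
# the real test appends these lines to rpcs.txt and replays them in one batch).
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

for i in $(seq 1 10); do
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t rdma -a 192.168.100.8 -s 4420
done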
00:23:13.362 17:32:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.362 17:32:32 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:13.362 17:32:32 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:13.362 17:32:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:13.362 17:32:32 -- common/autotest_common.sh@10 -- # set +x 00:23:13.362 17:32:32 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:13.362 17:32:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.362 17:32:32 -- target/shutdown.sh@28 -- # cat 00:23:13.362 17:32:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.362 17:32:32 -- target/shutdown.sh@28 -- # cat 00:23:13.362 17:32:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.362 17:32:32 -- target/shutdown.sh@28 -- # cat 00:23:13.362 17:32:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.362 17:32:32 -- target/shutdown.sh@28 -- # cat 00:23:13.362 17:32:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.362 17:32:32 -- target/shutdown.sh@28 -- # cat 00:23:13.362 17:32:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.362 17:32:32 -- target/shutdown.sh@28 -- # cat 00:23:13.362 17:32:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.362 17:32:32 -- target/shutdown.sh@28 -- # cat 00:23:13.362 17:32:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.362 17:32:32 -- target/shutdown.sh@28 -- # cat 00:23:13.362 17:32:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.362 17:32:32 -- target/shutdown.sh@28 -- # cat 00:23:13.362 17:32:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:13.362 17:32:32 -- target/shutdown.sh@28 -- # cat 00:23:13.362 17:32:32 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:13.362 17:32:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.362 17:32:32 -- common/autotest_common.sh@10 -- # set +x 00:23:13.362 Malloc1 00:23:13.362 [2024-11-09 17:32:33.033026] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:13.362 Malloc2 00:23:13.362 Malloc3 00:23:13.621 Malloc4 00:23:13.621 Malloc5 00:23:13.621 Malloc6 00:23:13.621 Malloc7 00:23:13.621 Malloc8 00:23:13.621 Malloc9 00:23:13.880 Malloc10 00:23:13.880 17:32:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.880 17:32:33 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:13.880 17:32:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:13.880 17:32:33 -- common/autotest_common.sh@10 -- # set +x 00:23:13.880 17:32:33 -- target/shutdown.sh@78 -- # perfpid=2773737 00:23:13.880 17:32:33 -- target/shutdown.sh@79 -- # waitforlisten 2773737 /var/tmp/bdevperf.sock 00:23:13.880 17:32:33 -- common/autotest_common.sh@829 -- # '[' -z 2773737 ']' 00:23:13.880 17:32:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.880 17:32:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.880 17:32:33 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:13.880 17:32:33 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:13.880 17:32:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.880 17:32:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.880 17:32:33 -- nvmf/common.sh@520 -- # config=() 00:23:13.880 17:32:33 -- common/autotest_common.sh@10 -- # set +x 00:23:13.880 17:32:33 -- nvmf/common.sh@520 -- # local subsystem config 00:23:13.880 17:32:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:13.880 17:32:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:13.880 { 00:23:13.880 "params": { 00:23:13.880 "name": "Nvme$subsystem", 00:23:13.880 "trtype": "$TEST_TRANSPORT", 00:23:13.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.880 "adrfam": "ipv4", 00:23:13.880 "trsvcid": "$NVMF_PORT", 00:23:13.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.880 "hdgst": ${hdgst:-false}, 00:23:13.880 "ddgst": ${ddgst:-false} 00:23:13.880 }, 00:23:13.880 "method": "bdev_nvme_attach_controller" 00:23:13.880 } 00:23:13.880 EOF 00:23:13.880 )") 00:23:13.880 17:32:33 -- nvmf/common.sh@542 -- # cat 00:23:13.880 17:32:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:13.880 17:32:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:13.880 { 00:23:13.880 "params": { 00:23:13.880 "name": "Nvme$subsystem", 00:23:13.880 "trtype": "$TEST_TRANSPORT", 00:23:13.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.880 "adrfam": "ipv4", 00:23:13.880 "trsvcid": "$NVMF_PORT", 00:23:13.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.880 "hdgst": ${hdgst:-false}, 00:23:13.880 "ddgst": ${ddgst:-false} 00:23:13.880 }, 00:23:13.880 "method": "bdev_nvme_attach_controller" 00:23:13.880 } 00:23:13.880 EOF 00:23:13.880 )") 00:23:13.880 17:32:33 -- nvmf/common.sh@542 -- # cat 00:23:13.880 17:32:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:13.880 17:32:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:13.880 { 00:23:13.880 "params": { 00:23:13.880 "name": "Nvme$subsystem", 00:23:13.880 "trtype": "$TEST_TRANSPORT", 00:23:13.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.880 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "$NVMF_PORT", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.881 "hdgst": ${hdgst:-false}, 00:23:13.881 "ddgst": ${ddgst:-false} 00:23:13.881 }, 00:23:13.881 "method": "bdev_nvme_attach_controller" 00:23:13.881 } 00:23:13.881 EOF 00:23:13.881 )") 00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # cat 00:23:13.881 17:32:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:13.881 { 00:23:13.881 "params": { 00:23:13.881 "name": "Nvme$subsystem", 00:23:13.881 "trtype": "$TEST_TRANSPORT", 00:23:13.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.881 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "$NVMF_PORT", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.881 "hdgst": ${hdgst:-false}, 00:23:13.881 "ddgst": ${ddgst:-false} 00:23:13.881 }, 00:23:13.881 "method": "bdev_nvme_attach_controller" 00:23:13.881 } 00:23:13.881 EOF 00:23:13.881 )") 00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # cat 00:23:13.881 17:32:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:13.881 { 00:23:13.881 "params": { 00:23:13.881 "name": "Nvme$subsystem", 00:23:13.881 "trtype": "$TEST_TRANSPORT", 00:23:13.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.881 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "$NVMF_PORT", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.881 "hdgst": ${hdgst:-false}, 00:23:13.881 "ddgst": ${ddgst:-false} 00:23:13.881 }, 00:23:13.881 "method": "bdev_nvme_attach_controller" 00:23:13.881 } 00:23:13.881 EOF 00:23:13.881 )") 00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # cat 00:23:13.881 17:32:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:13.881 { 00:23:13.881 "params": { 00:23:13.881 "name": "Nvme$subsystem", 00:23:13.881 "trtype": "$TEST_TRANSPORT", 00:23:13.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.881 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "$NVMF_PORT", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.881 "hdgst": ${hdgst:-false}, 00:23:13.881 "ddgst": ${ddgst:-false} 00:23:13.881 }, 00:23:13.881 "method": "bdev_nvme_attach_controller" 00:23:13.881 } 00:23:13.881 EOF 00:23:13.881 )") 00:23:13.881 [2024-11-09 17:32:33.519006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:13.881 [2024-11-09 17:32:33.519059] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # cat 00:23:13.881 17:32:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:13.881 { 00:23:13.881 "params": { 00:23:13.881 "name": "Nvme$subsystem", 00:23:13.881 "trtype": "$TEST_TRANSPORT", 00:23:13.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.881 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "$NVMF_PORT", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.881 "hdgst": ${hdgst:-false}, 00:23:13.881 "ddgst": ${ddgst:-false} 00:23:13.881 }, 00:23:13.881 "method": "bdev_nvme_attach_controller" 00:23:13.881 } 00:23:13.881 EOF 00:23:13.881 )") 00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # cat 00:23:13.881 17:32:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:13.881 { 00:23:13.881 "params": { 00:23:13.881 "name": "Nvme$subsystem", 00:23:13.881 "trtype": "$TEST_TRANSPORT", 00:23:13.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.881 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "$NVMF_PORT", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.881 "hdgst": ${hdgst:-false}, 00:23:13.881 "ddgst": ${ddgst:-false} 00:23:13.881 }, 00:23:13.881 "method": "bdev_nvme_attach_controller" 00:23:13.881 } 00:23:13.881 EOF 00:23:13.881 )") 00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # cat 00:23:13.881 17:32:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:13.881 { 00:23:13.881 "params": { 00:23:13.881 "name": 
"Nvme$subsystem", 00:23:13.881 "trtype": "$TEST_TRANSPORT", 00:23:13.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.881 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "$NVMF_PORT", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.881 "hdgst": ${hdgst:-false}, 00:23:13.881 "ddgst": ${ddgst:-false} 00:23:13.881 }, 00:23:13.881 "method": "bdev_nvme_attach_controller" 00:23:13.881 } 00:23:13.881 EOF 00:23:13.881 )") 00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # cat 00:23:13.881 17:32:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:13.881 { 00:23:13.881 "params": { 00:23:13.881 "name": "Nvme$subsystem", 00:23:13.881 "trtype": "$TEST_TRANSPORT", 00:23:13.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.881 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "$NVMF_PORT", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.881 "hdgst": ${hdgst:-false}, 00:23:13.881 "ddgst": ${ddgst:-false} 00:23:13.881 }, 00:23:13.881 "method": "bdev_nvme_attach_controller" 00:23:13.881 } 00:23:13.881 EOF 00:23:13.881 )") 00:23:13.881 17:32:33 -- nvmf/common.sh@542 -- # cat 00:23:13.881 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.881 17:32:33 -- nvmf/common.sh@544 -- # jq . 00:23:13.881 17:32:33 -- nvmf/common.sh@545 -- # IFS=, 00:23:13.881 17:32:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:13.881 "params": { 00:23:13.881 "name": "Nvme1", 00:23:13.881 "trtype": "rdma", 00:23:13.881 "traddr": "192.168.100.8", 00:23:13.881 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "4420", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.881 "hdgst": false, 00:23:13.881 "ddgst": false 00:23:13.881 }, 00:23:13.881 "method": "bdev_nvme_attach_controller" 00:23:13.881 },{ 00:23:13.881 "params": { 00:23:13.881 "name": "Nvme2", 00:23:13.881 "trtype": "rdma", 00:23:13.881 "traddr": "192.168.100.8", 00:23:13.881 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "4420", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:13.881 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:13.881 "hdgst": false, 00:23:13.881 "ddgst": false 00:23:13.881 }, 00:23:13.881 "method": "bdev_nvme_attach_controller" 00:23:13.881 },{ 00:23:13.881 "params": { 00:23:13.881 "name": "Nvme3", 00:23:13.881 "trtype": "rdma", 00:23:13.881 "traddr": "192.168.100.8", 00:23:13.881 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "4420", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:13.881 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:13.881 "hdgst": false, 00:23:13.881 "ddgst": false 00:23:13.881 }, 00:23:13.881 "method": "bdev_nvme_attach_controller" 00:23:13.881 },{ 00:23:13.881 "params": { 00:23:13.881 "name": "Nvme4", 00:23:13.881 "trtype": "rdma", 00:23:13.881 "traddr": "192.168.100.8", 00:23:13.881 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "4420", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:13.881 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:13.881 "hdgst": false, 00:23:13.881 "ddgst": false 00:23:13.881 }, 00:23:13.881 "method": "bdev_nvme_attach_controller" 00:23:13.881 },{ 00:23:13.881 "params": { 00:23:13.881 "name": "Nvme5", 00:23:13.881 "trtype": "rdma", 00:23:13.881 "traddr": "192.168.100.8", 00:23:13.881 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "4420", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:13.881 
"hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:13.881 "hdgst": false, 00:23:13.881 "ddgst": false 00:23:13.881 }, 00:23:13.881 "method": "bdev_nvme_attach_controller" 00:23:13.881 },{ 00:23:13.881 "params": { 00:23:13.881 "name": "Nvme6", 00:23:13.881 "trtype": "rdma", 00:23:13.881 "traddr": "192.168.100.8", 00:23:13.881 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "4420", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:13.881 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:13.881 "hdgst": false, 00:23:13.881 "ddgst": false 00:23:13.881 }, 00:23:13.881 "method": "bdev_nvme_attach_controller" 00:23:13.881 },{ 00:23:13.881 "params": { 00:23:13.881 "name": "Nvme7", 00:23:13.881 "trtype": "rdma", 00:23:13.881 "traddr": "192.168.100.8", 00:23:13.881 "adrfam": "ipv4", 00:23:13.881 "trsvcid": "4420", 00:23:13.881 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:13.881 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:13.881 "hdgst": false, 00:23:13.881 "ddgst": false 00:23:13.881 }, 00:23:13.882 "method": "bdev_nvme_attach_controller" 00:23:13.882 },{ 00:23:13.882 "params": { 00:23:13.882 "name": "Nvme8", 00:23:13.882 "trtype": "rdma", 00:23:13.882 "traddr": "192.168.100.8", 00:23:13.882 "adrfam": "ipv4", 00:23:13.882 "trsvcid": "4420", 00:23:13.882 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:13.882 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:13.882 "hdgst": false, 00:23:13.882 "ddgst": false 00:23:13.882 }, 00:23:13.882 "method": "bdev_nvme_attach_controller" 00:23:13.882 },{ 00:23:13.882 "params": { 00:23:13.882 "name": "Nvme9", 00:23:13.882 "trtype": "rdma", 00:23:13.882 "traddr": "192.168.100.8", 00:23:13.882 "adrfam": "ipv4", 00:23:13.882 "trsvcid": "4420", 00:23:13.882 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:13.882 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:13.882 "hdgst": false, 00:23:13.882 "ddgst": false 00:23:13.882 }, 00:23:13.882 "method": "bdev_nvme_attach_controller" 00:23:13.882 },{ 00:23:13.882 "params": { 00:23:13.882 "name": "Nvme10", 00:23:13.882 "trtype": "rdma", 00:23:13.882 "traddr": "192.168.100.8", 00:23:13.882 "adrfam": "ipv4", 00:23:13.882 "trsvcid": "4420", 00:23:13.882 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:13.882 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:13.882 "hdgst": false, 00:23:13.882 "ddgst": false 00:23:13.882 }, 00:23:13.882 "method": "bdev_nvme_attach_controller" 00:23:13.882 }' 00:23:13.882 [2024-11-09 17:32:33.590153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.141 [2024-11-09 17:32:33.656871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.519 17:32:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.519 17:32:35 -- common/autotest_common.sh@862 -- # return 0 00:23:15.519 17:32:35 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:15.519 17:32:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.519 17:32:35 -- common/autotest_common.sh@10 -- # set +x 00:23:15.519 17:32:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.519 17:32:35 -- target/shutdown.sh@83 -- # kill -9 2773737 00:23:15.519 17:32:35 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:15.519 17:32:35 -- target/shutdown.sh@87 -- # sleep 1 00:23:16.461 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2773737 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:16.461 17:32:36 -- target/shutdown.sh@88 -- # kill -0 
2773408 00:23:16.461 17:32:36 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:16.461 17:32:36 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:16.461 17:32:36 -- nvmf/common.sh@520 -- # config=() 00:23:16.461 17:32:36 -- nvmf/common.sh@520 -- # local subsystem config 00:23:16.461 17:32:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:16.461 17:32:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:16.461 { 00:23:16.461 "params": { 00:23:16.461 "name": "Nvme$subsystem", 00:23:16.461 "trtype": "$TEST_TRANSPORT", 00:23:16.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.461 "adrfam": "ipv4", 00:23:16.461 "trsvcid": "$NVMF_PORT", 00:23:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.461 "hdgst": ${hdgst:-false}, 00:23:16.461 "ddgst": ${ddgst:-false} 00:23:16.461 }, 00:23:16.461 "method": "bdev_nvme_attach_controller" 00:23:16.461 } 00:23:16.461 EOF 00:23:16.461 )") 00:23:16.461 17:32:36 -- nvmf/common.sh@542 -- # cat 00:23:16.461 17:32:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:16.461 17:32:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:16.461 { 00:23:16.461 "params": { 00:23:16.461 "name": "Nvme$subsystem", 00:23:16.461 "trtype": "$TEST_TRANSPORT", 00:23:16.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.461 "adrfam": "ipv4", 00:23:16.461 "trsvcid": "$NVMF_PORT", 00:23:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.461 "hdgst": ${hdgst:-false}, 00:23:16.461 "ddgst": ${ddgst:-false} 00:23:16.461 }, 00:23:16.461 "method": "bdev_nvme_attach_controller" 00:23:16.461 } 00:23:16.461 EOF 00:23:16.461 )") 00:23:16.461 17:32:36 -- nvmf/common.sh@542 -- # cat 00:23:16.461 17:32:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:16.461 17:32:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:16.461 { 00:23:16.461 "params": { 00:23:16.461 "name": "Nvme$subsystem", 00:23:16.461 "trtype": "$TEST_TRANSPORT", 00:23:16.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.461 "adrfam": "ipv4", 00:23:16.461 "trsvcid": "$NVMF_PORT", 00:23:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.461 "hdgst": ${hdgst:-false}, 00:23:16.461 "ddgst": ${ddgst:-false} 00:23:16.461 }, 00:23:16.461 "method": "bdev_nvme_attach_controller" 00:23:16.461 } 00:23:16.461 EOF 00:23:16.461 )") 00:23:16.461 17:32:36 -- nvmf/common.sh@542 -- # cat 00:23:16.461 17:32:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:16.461 17:32:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:16.461 { 00:23:16.461 "params": { 00:23:16.461 "name": "Nvme$subsystem", 00:23:16.461 "trtype": "$TEST_TRANSPORT", 00:23:16.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.461 "adrfam": "ipv4", 00:23:16.461 "trsvcid": "$NVMF_PORT", 00:23:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.461 "hdgst": ${hdgst:-false}, 00:23:16.461 "ddgst": ${ddgst:-false} 00:23:16.461 }, 00:23:16.461 "method": "bdev_nvme_attach_controller" 00:23:16.461 } 00:23:16.461 EOF 00:23:16.461 )") 00:23:16.461 17:32:36 -- nvmf/common.sh@542 -- # cat 00:23:16.461 17:32:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:16.461 
17:32:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:16.461 { 00:23:16.461 "params": { 00:23:16.461 "name": "Nvme$subsystem", 00:23:16.461 "trtype": "$TEST_TRANSPORT", 00:23:16.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.461 "adrfam": "ipv4", 00:23:16.461 "trsvcid": "$NVMF_PORT", 00:23:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.461 "hdgst": ${hdgst:-false}, 00:23:16.461 "ddgst": ${ddgst:-false} 00:23:16.461 }, 00:23:16.461 "method": "bdev_nvme_attach_controller" 00:23:16.461 } 00:23:16.461 EOF 00:23:16.461 )") 00:23:16.461 17:32:36 -- nvmf/common.sh@542 -- # cat 00:23:16.461 17:32:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:16.461 17:32:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:16.461 { 00:23:16.461 "params": { 00:23:16.461 "name": "Nvme$subsystem", 00:23:16.461 "trtype": "$TEST_TRANSPORT", 00:23:16.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.461 "adrfam": "ipv4", 00:23:16.461 "trsvcid": "$NVMF_PORT", 00:23:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.461 "hdgst": ${hdgst:-false}, 00:23:16.461 "ddgst": ${ddgst:-false} 00:23:16.461 }, 00:23:16.461 "method": "bdev_nvme_attach_controller" 00:23:16.461 } 00:23:16.461 EOF 00:23:16.461 )") 00:23:16.462 [2024-11-09 17:32:36.077097] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:16.462 [2024-11-09 17:32:36.077151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2774097 ] 00:23:16.462 17:32:36 -- nvmf/common.sh@542 -- # cat 00:23:16.462 17:32:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:16.462 17:32:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:16.462 { 00:23:16.462 "params": { 00:23:16.462 "name": "Nvme$subsystem", 00:23:16.462 "trtype": "$TEST_TRANSPORT", 00:23:16.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.462 "adrfam": "ipv4", 00:23:16.462 "trsvcid": "$NVMF_PORT", 00:23:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.462 "hdgst": ${hdgst:-false}, 00:23:16.462 "ddgst": ${ddgst:-false} 00:23:16.462 }, 00:23:16.462 "method": "bdev_nvme_attach_controller" 00:23:16.462 } 00:23:16.462 EOF 00:23:16.462 )") 00:23:16.462 17:32:36 -- nvmf/common.sh@542 -- # cat 00:23:16.462 17:32:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:16.462 17:32:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:16.462 { 00:23:16.462 "params": { 00:23:16.462 "name": "Nvme$subsystem", 00:23:16.462 "trtype": "$TEST_TRANSPORT", 00:23:16.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.462 "adrfam": "ipv4", 00:23:16.462 "trsvcid": "$NVMF_PORT", 00:23:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.462 "hdgst": ${hdgst:-false}, 00:23:16.462 "ddgst": ${ddgst:-false} 00:23:16.462 }, 00:23:16.462 "method": "bdev_nvme_attach_controller" 00:23:16.462 } 00:23:16.462 EOF 00:23:16.462 )") 00:23:16.462 17:32:36 -- nvmf/common.sh@542 -- # cat 00:23:16.462 17:32:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:16.462 17:32:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:16.462 { 00:23:16.462 "params": { 00:23:16.462 "name": 
"Nvme$subsystem", 00:23:16.462 "trtype": "$TEST_TRANSPORT", 00:23:16.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.462 "adrfam": "ipv4", 00:23:16.462 "trsvcid": "$NVMF_PORT", 00:23:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.462 "hdgst": ${hdgst:-false}, 00:23:16.462 "ddgst": ${ddgst:-false} 00:23:16.462 }, 00:23:16.462 "method": "bdev_nvme_attach_controller" 00:23:16.462 } 00:23:16.462 EOF 00:23:16.462 )") 00:23:16.462 17:32:36 -- nvmf/common.sh@542 -- # cat 00:23:16.462 17:32:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:16.462 17:32:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:16.462 { 00:23:16.462 "params": { 00:23:16.462 "name": "Nvme$subsystem", 00:23:16.462 "trtype": "$TEST_TRANSPORT", 00:23:16.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.462 "adrfam": "ipv4", 00:23:16.462 "trsvcid": "$NVMF_PORT", 00:23:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.462 "hdgst": ${hdgst:-false}, 00:23:16.462 "ddgst": ${ddgst:-false} 00:23:16.462 }, 00:23:16.462 "method": "bdev_nvme_attach_controller" 00:23:16.462 } 00:23:16.462 EOF 00:23:16.462 )") 00:23:16.462 17:32:36 -- nvmf/common.sh@542 -- # cat 00:23:16.462 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.462 17:32:36 -- nvmf/common.sh@544 -- # jq . 00:23:16.462 17:32:36 -- nvmf/common.sh@545 -- # IFS=, 00:23:16.462 17:32:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:16.462 "params": { 00:23:16.462 "name": "Nvme1", 00:23:16.462 "trtype": "rdma", 00:23:16.462 "traddr": "192.168.100.8", 00:23:16.462 "adrfam": "ipv4", 00:23:16.462 "trsvcid": "4420", 00:23:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.462 "hdgst": false, 00:23:16.462 "ddgst": false 00:23:16.462 }, 00:23:16.462 "method": "bdev_nvme_attach_controller" 00:23:16.462 },{ 00:23:16.462 "params": { 00:23:16.462 "name": "Nvme2", 00:23:16.462 "trtype": "rdma", 00:23:16.462 "traddr": "192.168.100.8", 00:23:16.462 "adrfam": "ipv4", 00:23:16.462 "trsvcid": "4420", 00:23:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:16.462 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:16.462 "hdgst": false, 00:23:16.462 "ddgst": false 00:23:16.462 }, 00:23:16.462 "method": "bdev_nvme_attach_controller" 00:23:16.462 },{ 00:23:16.462 "params": { 00:23:16.462 "name": "Nvme3", 00:23:16.462 "trtype": "rdma", 00:23:16.462 "traddr": "192.168.100.8", 00:23:16.462 "adrfam": "ipv4", 00:23:16.462 "trsvcid": "4420", 00:23:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:16.462 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:16.462 "hdgst": false, 00:23:16.462 "ddgst": false 00:23:16.462 }, 00:23:16.462 "method": "bdev_nvme_attach_controller" 00:23:16.462 },{ 00:23:16.462 "params": { 00:23:16.462 "name": "Nvme4", 00:23:16.462 "trtype": "rdma", 00:23:16.462 "traddr": "192.168.100.8", 00:23:16.462 "adrfam": "ipv4", 00:23:16.462 "trsvcid": "4420", 00:23:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:16.462 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:16.462 "hdgst": false, 00:23:16.462 "ddgst": false 00:23:16.462 }, 00:23:16.462 "method": "bdev_nvme_attach_controller" 00:23:16.462 },{ 00:23:16.462 "params": { 00:23:16.462 "name": "Nvme5", 00:23:16.462 "trtype": "rdma", 00:23:16.462 "traddr": "192.168.100.8", 00:23:16.462 "adrfam": "ipv4", 00:23:16.462 "trsvcid": "4420", 00:23:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:16.462 
"hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:16.462 "hdgst": false, 00:23:16.462 "ddgst": false 00:23:16.462 }, 00:23:16.462 "method": "bdev_nvme_attach_controller" 00:23:16.462 },{ 00:23:16.462 "params": { 00:23:16.462 "name": "Nvme6", 00:23:16.462 "trtype": "rdma", 00:23:16.462 "traddr": "192.168.100.8", 00:23:16.462 "adrfam": "ipv4", 00:23:16.462 "trsvcid": "4420", 00:23:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:16.462 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:16.462 "hdgst": false, 00:23:16.462 "ddgst": false 00:23:16.462 }, 00:23:16.462 "method": "bdev_nvme_attach_controller" 00:23:16.462 },{ 00:23:16.462 "params": { 00:23:16.462 "name": "Nvme7", 00:23:16.462 "trtype": "rdma", 00:23:16.462 "traddr": "192.168.100.8", 00:23:16.462 "adrfam": "ipv4", 00:23:16.462 "trsvcid": "4420", 00:23:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:16.462 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:16.462 "hdgst": false, 00:23:16.462 "ddgst": false 00:23:16.462 }, 00:23:16.462 "method": "bdev_nvme_attach_controller" 00:23:16.462 },{ 00:23:16.462 "params": { 00:23:16.462 "name": "Nvme8", 00:23:16.462 "trtype": "rdma", 00:23:16.462 "traddr": "192.168.100.8", 00:23:16.462 "adrfam": "ipv4", 00:23:16.462 "trsvcid": "4420", 00:23:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:16.462 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:16.462 "hdgst": false, 00:23:16.462 "ddgst": false 00:23:16.462 }, 00:23:16.462 "method": "bdev_nvme_attach_controller" 00:23:16.462 },{ 00:23:16.462 "params": { 00:23:16.462 "name": "Nvme9", 00:23:16.462 "trtype": "rdma", 00:23:16.462 "traddr": "192.168.100.8", 00:23:16.462 "adrfam": "ipv4", 00:23:16.462 "trsvcid": "4420", 00:23:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:16.462 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:16.462 "hdgst": false, 00:23:16.462 "ddgst": false 00:23:16.462 }, 00:23:16.462 "method": "bdev_nvme_attach_controller" 00:23:16.462 },{ 00:23:16.462 "params": { 00:23:16.462 "name": "Nvme10", 00:23:16.462 "trtype": "rdma", 00:23:16.462 "traddr": "192.168.100.8", 00:23:16.462 "adrfam": "ipv4", 00:23:16.462 "trsvcid": "4420", 00:23:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:16.462 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:16.462 "hdgst": false, 00:23:16.462 "ddgst": false 00:23:16.462 }, 00:23:16.462 "method": "bdev_nvme_attach_controller" 00:23:16.462 }' 00:23:16.462 [2024-11-09 17:32:36.149996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.462 [2024-11-09 17:32:36.218699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.401 Running I/O for 1 seconds... 
00:23:18.780 00:23:18.780 Latency(us) 00:23:18.780 [2024-11-09T16:32:38.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.780 [2024-11-09T16:32:38.550Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.780 Verification LBA range: start 0x0 length 0x400 00:23:18.780 Nvme1n1 : 1.09 721.41 45.09 0.00 0.00 87754.28 7287.60 116601.65 00:23:18.780 [2024-11-09T16:32:38.550Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.780 Verification LBA range: start 0x0 length 0x400 00:23:18.780 Nvme2n1 : 1.09 727.14 45.45 0.00 0.00 86474.36 7549.75 111568.49 00:23:18.780 [2024-11-09T16:32:38.550Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.780 Verification LBA range: start 0x0 length 0x400 00:23:18.780 Nvme3n1 : 1.10 752.01 47.00 0.00 0.00 83128.60 7811.89 105277.03 00:23:18.780 [2024-11-09T16:32:38.550Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.780 Verification LBA range: start 0x0 length 0x400 00:23:18.780 Nvme4n1 : 1.10 752.24 47.01 0.00 0.00 82562.27 8021.61 74658.61 00:23:18.780 [2024-11-09T16:32:38.550Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.780 Verification LBA range: start 0x0 length 0x400 00:23:18.780 Nvme5n1 : 1.10 751.56 46.97 0.00 0.00 82152.59 8231.32 73400.32 00:23:18.780 [2024-11-09T16:32:38.550Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.780 Verification LBA range: start 0x0 length 0x400 00:23:18.780 Nvme6n1 : 1.10 750.87 46.93 0.00 0.00 81724.90 8441.04 71722.60 00:23:18.780 [2024-11-09T16:32:38.551Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.781 Verification LBA range: start 0x0 length 0x400 00:23:18.781 Nvme7n1 : 1.10 750.19 46.89 0.00 0.00 81294.86 8650.75 70464.31 00:23:18.781 [2024-11-09T16:32:38.551Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.781 Verification LBA range: start 0x0 length 0x400 00:23:18.781 Nvme8n1 : 1.10 749.51 46.84 0.00 0.00 80876.83 8860.47 72142.03 00:23:18.781 [2024-11-09T16:32:38.551Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.781 Verification LBA range: start 0x0 length 0x400 00:23:18.781 Nvme9n1 : 1.10 748.84 46.80 0.00 0.00 80456.63 9070.18 73819.75 00:23:18.781 [2024-11-09T16:32:38.551Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.781 Verification LBA range: start 0x0 length 0x400 00:23:18.781 Nvme10n1 : 1.10 553.24 34.58 0.00 0.00 108079.13 7602.18 325477.99 00:23:18.781 [2024-11-09T16:32:38.551Z] =================================================================================================================== 00:23:18.781 [2024-11-09T16:32:38.551Z] Total : 7257.01 453.56 0.00 0.00 84826.09 7287.60 325477.99 00:23:18.781 17:32:38 -- target/shutdown.sh@93 -- # stoptarget 00:23:18.781 17:32:38 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:18.781 17:32:38 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:18.781 17:32:38 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:18.781 17:32:38 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:18.781 17:32:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:18.781 17:32:38 -- nvmf/common.sh@116 -- # sync 00:23:18.781 17:32:38 -- 
nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:18.781 17:32:38 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:18.781 17:32:38 -- nvmf/common.sh@119 -- # set +e 00:23:18.781 17:32:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:18.781 17:32:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:18.781 rmmod nvme_rdma 00:23:18.781 rmmod nvme_fabrics 00:23:19.040 17:32:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:19.040 17:32:38 -- nvmf/common.sh@123 -- # set -e 00:23:19.040 17:32:38 -- nvmf/common.sh@124 -- # return 0 00:23:19.040 17:32:38 -- nvmf/common.sh@477 -- # '[' -n 2773408 ']' 00:23:19.040 17:32:38 -- nvmf/common.sh@478 -- # killprocess 2773408 00:23:19.040 17:32:38 -- common/autotest_common.sh@936 -- # '[' -z 2773408 ']' 00:23:19.040 17:32:38 -- common/autotest_common.sh@940 -- # kill -0 2773408 00:23:19.040 17:32:38 -- common/autotest_common.sh@941 -- # uname 00:23:19.040 17:32:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:19.040 17:32:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2773408 00:23:19.040 17:32:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:19.040 17:32:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:19.040 17:32:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2773408' 00:23:19.040 killing process with pid 2773408 00:23:19.040 17:32:38 -- common/autotest_common.sh@955 -- # kill 2773408 00:23:19.040 17:32:38 -- common/autotest_common.sh@960 -- # wait 2773408 00:23:19.610 17:32:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:19.610 17:32:39 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:19.610 00:23:19.610 real 0m13.994s 00:23:19.610 user 0m33.414s 00:23:19.610 sys 0m6.313s 00:23:19.610 17:32:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:19.610 17:32:39 -- common/autotest_common.sh@10 -- # set +x 00:23:19.610 ************************************ 00:23:19.610 END TEST nvmf_shutdown_tc1 00:23:19.610 ************************************ 00:23:19.610 17:32:39 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:19.610 17:32:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:19.610 17:32:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:19.610 17:32:39 -- common/autotest_common.sh@10 -- # set +x 00:23:19.610 ************************************ 00:23:19.610 START TEST nvmf_shutdown_tc2 00:23:19.610 ************************************ 00:23:19.610 17:32:39 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc2 00:23:19.610 17:32:39 -- target/shutdown.sh@98 -- # starttarget 00:23:19.610 17:32:39 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:19.610 17:32:39 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:19.610 17:32:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.610 17:32:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:19.610 17:32:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:19.610 17:32:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:19.610 17:32:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.610 17:32:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.610 17:32:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.610 17:32:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:19.610 17:32:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:19.610 17:32:39 -- nvmf/common.sh@284 -- # xtrace_disable 
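The killprocess sequence traced just above (kill -0, uname, ps --no-headers -o comm=, the reactor_1 vs sudo check, kill, wait) can be sketched roughly as follows; the sudo branch is an assumption, as it is not exercised in this run:

# Sketch of the killprocess helper from autotest_common.sh, as implied by the trace.
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 0                              # already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")     # reactor_1 in the run above
    else
        process_name=$pid
    fi
    if [ "$process_name" = sudo ]; then
        # Started through sudo: redirect the signal at the child process
        # (assumption; this branch is not taken in the trace).
        pid=$(pgrep -P "$pid")
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}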
00:23:19.610 17:32:39 -- common/autotest_common.sh@10 -- # set +x 00:23:19.610 17:32:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:19.610 17:32:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:19.610 17:32:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:19.610 17:32:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:19.610 17:32:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:19.610 17:32:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:19.610 17:32:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:19.610 17:32:39 -- nvmf/common.sh@294 -- # net_devs=() 00:23:19.610 17:32:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:19.610 17:32:39 -- nvmf/common.sh@295 -- # e810=() 00:23:19.610 17:32:39 -- nvmf/common.sh@295 -- # local -ga e810 00:23:19.610 17:32:39 -- nvmf/common.sh@296 -- # x722=() 00:23:19.610 17:32:39 -- nvmf/common.sh@296 -- # local -ga x722 00:23:19.610 17:32:39 -- nvmf/common.sh@297 -- # mlx=() 00:23:19.610 17:32:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:19.610 17:32:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.610 17:32:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.610 17:32:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.610 17:32:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.610 17:32:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.610 17:32:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.610 17:32:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.610 17:32:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.610 17:32:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.610 17:32:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.610 17:32:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.610 17:32:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:19.610 17:32:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:19.610 17:32:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:19.610 17:32:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:19.610 17:32:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:19.610 17:32:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:19.610 17:32:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:19.611 17:32:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:19.611 17:32:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:19.611 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:19.611 17:32:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:19.611 17:32:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:19.611 17:32:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:19.611 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:19.611 17:32:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:19.611 17:32:39 -- 
nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:19.611 17:32:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:19.611 17:32:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:19.611 17:32:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.611 17:32:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:19.611 17:32:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.611 17:32:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:19.611 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:19.611 17:32:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.611 17:32:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:19.611 17:32:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.611 17:32:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:19.611 17:32:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.611 17:32:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:19.611 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:19.611 17:32:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.611 17:32:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:19.611 17:32:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:19.611 17:32:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:19.611 17:32:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:19.611 17:32:39 -- nvmf/common.sh@57 -- # uname 00:23:19.611 17:32:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:19.611 17:32:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:19.611 17:32:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:19.611 17:32:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:19.611 17:32:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:19.611 17:32:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:19.611 17:32:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:19.611 17:32:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:19.611 17:32:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:19.611 17:32:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:19.611 17:32:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:19.611 17:32:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:19.611 17:32:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:19.611 17:32:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:19.611 17:32:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:19.611 17:32:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:19.611 17:32:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:19.611 17:32:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.611 17:32:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:19.611 
17:32:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:19.611 17:32:39 -- nvmf/common.sh@104 -- # continue 2 00:23:19.611 17:32:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:19.611 17:32:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.611 17:32:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.611 17:32:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:19.611 17:32:39 -- nvmf/common.sh@104 -- # continue 2 00:23:19.611 17:32:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:19.611 17:32:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:19.611 17:32:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:19.611 17:32:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:19.611 17:32:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:19.611 17:32:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:19.611 17:32:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:19.611 17:32:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:19.611 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:19.611 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:19.611 altname enp217s0f0np0 00:23:19.611 altname ens818f0np0 00:23:19.611 inet 192.168.100.8/24 scope global mlx_0_0 00:23:19.611 valid_lft forever preferred_lft forever 00:23:19.611 17:32:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:19.611 17:32:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:19.611 17:32:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:19.611 17:32:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:19.611 17:32:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:19.611 17:32:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:19.611 17:32:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:19.611 17:32:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:19.611 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:19.611 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:19.611 altname enp217s0f1np1 00:23:19.611 altname ens818f1np1 00:23:19.611 inet 192.168.100.9/24 scope global mlx_0_1 00:23:19.611 valid_lft forever preferred_lft forever 00:23:19.611 17:32:39 -- nvmf/common.sh@410 -- # return 0 00:23:19.611 17:32:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:19.611 17:32:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:19.611 17:32:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:19.611 17:32:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:19.611 17:32:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:19.611 17:32:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:19.611 17:32:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:19.611 17:32:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:19.611 17:32:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:19.611 17:32:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:19.611 17:32:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
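The address discovery traced here boils down to a small helper of this shape, assembled from the ip/awk/cut commands shown above:

# Sketch of get_ip_address: pull the IPv4 address off an RDMA netdev.
get_ip_address() {
    local interface=$1
    # Take the CIDR field from `ip -o -4 addr show` and strip the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# In this run: get_ip_address mlx_0_0 -> 192.168.100.8, get_ip_address mlx_0_1 -> 192.168.100.9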
00:23:19.611 17:32:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:19.611 17:32:39 -- nvmf/common.sh@104 -- # continue 2 00:23:19.611 17:32:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:19.611 17:32:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.611 17:32:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.611 17:32:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:19.611 17:32:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:19.611 17:32:39 -- nvmf/common.sh@104 -- # continue 2 00:23:19.611 17:32:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:19.611 17:32:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:19.611 17:32:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:19.611 17:32:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:19.611 17:32:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:19.611 17:32:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:19.871 17:32:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:19.871 17:32:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:19.871 17:32:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:19.871 17:32:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:19.871 17:32:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:19.871 17:32:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:19.871 17:32:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:19.871 192.168.100.9' 00:23:19.871 17:32:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:19.871 192.168.100.9' 00:23:19.871 17:32:39 -- nvmf/common.sh@445 -- # head -n 1 00:23:19.871 17:32:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:19.871 17:32:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:19.871 192.168.100.9' 00:23:19.871 17:32:39 -- nvmf/common.sh@446 -- # tail -n +2 00:23:19.871 17:32:39 -- nvmf/common.sh@446 -- # head -n 1 00:23:19.871 17:32:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:19.871 17:32:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:19.871 17:32:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:19.871 17:32:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:19.871 17:32:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:19.871 17:32:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:19.872 17:32:39 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:19.872 17:32:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:19.872 17:32:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:19.872 17:32:39 -- common/autotest_common.sh@10 -- # set +x 00:23:19.872 17:32:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:19.872 17:32:39 -- nvmf/common.sh@469 -- # nvmfpid=2774852 00:23:19.872 17:32:39 -- nvmf/common.sh@470 -- # waitforlisten 2774852 00:23:19.872 17:32:39 -- common/autotest_common.sh@829 -- # '[' -z 2774852 ']' 00:23:19.872 17:32:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.872 17:32:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.872 17:32:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.872 17:32:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.872 17:32:39 -- common/autotest_common.sh@10 -- # set +x 00:23:19.872 [2024-11-09 17:32:39.463036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:19.872 [2024-11-09 17:32:39.463081] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.872 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.872 [2024-11-09 17:32:39.532611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:19.872 [2024-11-09 17:32:39.605107] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:19.872 [2024-11-09 17:32:39.605219] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.872 [2024-11-09 17:32:39.605234] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.872 [2024-11-09 17:32:39.605243] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.872 [2024-11-09 17:32:39.605342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.872 [2024-11-09 17:32:39.605427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:19.872 [2024-11-09 17:32:39.605538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.872 [2024-11-09 17:32:39.605538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:20.809 17:32:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.809 17:32:40 -- common/autotest_common.sh@862 -- # return 0 00:23:20.809 17:32:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:20.809 17:32:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:20.809 17:32:40 -- common/autotest_common.sh@10 -- # set +x 00:23:20.809 17:32:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.809 17:32:40 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:20.809 17:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.809 17:32:40 -- common/autotest_common.sh@10 -- # set +x 00:23:20.809 [2024-11-09 17:32:40.377946] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19e4380/0x19e8870) succeed. 00:23:20.809 [2024-11-09 17:32:40.387251] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19e5970/0x1a29f10) succeed. 
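waitforlisten, traced above for pid 2774852, amounts to polling the target's RPC socket until it answers. A minimal sketch follows; the default rpc_addr and max_retries=100 come from the trace, but the actual probing loop (rpc.py, rpc_get_methods, the sleep interval) is assumed, since the trace disables xtrace before that point:

# Sketch only -- wait until the nvmf target's RPC server is reachable.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    [ -z "$pid" ] && return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = max_retries; i != 0; i--)); do
        kill -0 "$pid" || return 1                          # target exited before it came up
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                                        # RPC server is answering
        fi
        sleep 0.5
    done
    return 1
}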
00:23:20.809 17:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.809 17:32:40 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:20.809 17:32:40 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:20.809 17:32:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:20.809 17:32:40 -- common/autotest_common.sh@10 -- # set +x 00:23:20.809 17:32:40 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:20.809 17:32:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.809 17:32:40 -- target/shutdown.sh@28 -- # cat 00:23:20.809 17:32:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.809 17:32:40 -- target/shutdown.sh@28 -- # cat 00:23:20.809 17:32:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.809 17:32:40 -- target/shutdown.sh@28 -- # cat 00:23:20.809 17:32:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.809 17:32:40 -- target/shutdown.sh@28 -- # cat 00:23:20.809 17:32:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.809 17:32:40 -- target/shutdown.sh@28 -- # cat 00:23:20.809 17:32:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.809 17:32:40 -- target/shutdown.sh@28 -- # cat 00:23:20.809 17:32:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.809 17:32:40 -- target/shutdown.sh@28 -- # cat 00:23:20.809 17:32:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.809 17:32:40 -- target/shutdown.sh@28 -- # cat 00:23:20.809 17:32:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.809 17:32:40 -- target/shutdown.sh@28 -- # cat 00:23:20.809 17:32:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.809 17:32:40 -- target/shutdown.sh@28 -- # cat 00:23:20.809 17:32:40 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:20.809 17:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.809 17:32:40 -- common/autotest_common.sh@10 -- # set +x 00:23:21.069 Malloc1 00:23:21.069 [2024-11-09 17:32:40.609431] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:21.069 Malloc2 00:23:21.069 Malloc3 00:23:21.069 Malloc4 00:23:21.069 Malloc5 00:23:21.069 Malloc6 00:23:21.328 Malloc7 00:23:21.328 Malloc8 00:23:21.328 Malloc9 00:23:21.328 Malloc10 00:23:21.328 17:32:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.328 17:32:41 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:21.328 17:32:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:21.328 17:32:41 -- common/autotest_common.sh@10 -- # set +x 00:23:21.328 17:32:41 -- target/shutdown.sh@102 -- # perfpid=2775177 00:23:21.328 17:32:41 -- target/shutdown.sh@103 -- # waitforlisten 2775177 /var/tmp/bdevperf.sock 00:23:21.328 17:32:41 -- common/autotest_common.sh@829 -- # '[' -z 2775177 ']' 00:23:21.328 17:32:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.328 17:32:41 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:21.328 17:32:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.328 17:32:41 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:21.328 17:32:41 -- nvmf/common.sh@520 -- # config=() 00:23:21.328 
17:32:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:21.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:21.328 17:32:41 -- nvmf/common.sh@520 -- # local subsystem config 00:23:21.328 17:32:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.328 17:32:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:21.328 17:32:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:21.328 { 00:23:21.328 "params": { 00:23:21.328 "name": "Nvme$subsystem", 00:23:21.328 "trtype": "$TEST_TRANSPORT", 00:23:21.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.328 "adrfam": "ipv4", 00:23:21.328 "trsvcid": "$NVMF_PORT", 00:23:21.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.328 "hdgst": ${hdgst:-false}, 00:23:21.328 "ddgst": ${ddgst:-false} 00:23:21.328 }, 00:23:21.328 "method": "bdev_nvme_attach_controller" 00:23:21.328 } 00:23:21.328 EOF 00:23:21.328 )") 00:23:21.328 17:32:41 -- common/autotest_common.sh@10 -- # set +x 00:23:21.328 17:32:41 -- nvmf/common.sh@542 -- # cat 00:23:21.328 17:32:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:21.328 17:32:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:21.328 { 00:23:21.328 "params": { 00:23:21.328 "name": "Nvme$subsystem", 00:23:21.328 "trtype": "$TEST_TRANSPORT", 00:23:21.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.328 "adrfam": "ipv4", 00:23:21.328 "trsvcid": "$NVMF_PORT", 00:23:21.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.328 "hdgst": ${hdgst:-false}, 00:23:21.328 "ddgst": ${ddgst:-false} 00:23:21.328 }, 00:23:21.328 "method": "bdev_nvme_attach_controller" 00:23:21.328 } 00:23:21.329 EOF 00:23:21.329 )") 00:23:21.329 17:32:41 -- nvmf/common.sh@542 -- # cat 00:23:21.329 17:32:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:21.329 17:32:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:21.329 { 00:23:21.329 "params": { 00:23:21.329 "name": "Nvme$subsystem", 00:23:21.329 "trtype": "$TEST_TRANSPORT", 00:23:21.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.329 "adrfam": "ipv4", 00:23:21.329 "trsvcid": "$NVMF_PORT", 00:23:21.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.329 "hdgst": ${hdgst:-false}, 00:23:21.329 "ddgst": ${ddgst:-false} 00:23:21.329 }, 00:23:21.329 "method": "bdev_nvme_attach_controller" 00:23:21.329 } 00:23:21.329 EOF 00:23:21.329 )") 00:23:21.329 17:32:41 -- nvmf/common.sh@542 -- # cat 00:23:21.329 17:32:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:21.329 17:32:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:21.329 { 00:23:21.329 "params": { 00:23:21.329 "name": "Nvme$subsystem", 00:23:21.329 "trtype": "$TEST_TRANSPORT", 00:23:21.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.329 "adrfam": "ipv4", 00:23:21.329 "trsvcid": "$NVMF_PORT", 00:23:21.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.329 "hdgst": ${hdgst:-false}, 00:23:21.329 "ddgst": ${ddgst:-false} 00:23:21.329 }, 00:23:21.329 "method": "bdev_nvme_attach_controller" 00:23:21.329 } 00:23:21.329 EOF 00:23:21.329 )") 00:23:21.329 17:32:41 -- nvmf/common.sh@542 -- # cat 00:23:21.329 17:32:41 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:23:21.329 17:32:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:21.329 { 00:23:21.329 "params": { 00:23:21.329 "name": "Nvme$subsystem", 00:23:21.329 "trtype": "$TEST_TRANSPORT", 00:23:21.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.329 "adrfam": "ipv4", 00:23:21.329 "trsvcid": "$NVMF_PORT", 00:23:21.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.329 "hdgst": ${hdgst:-false}, 00:23:21.329 "ddgst": ${ddgst:-false} 00:23:21.329 }, 00:23:21.329 "method": "bdev_nvme_attach_controller" 00:23:21.329 } 00:23:21.329 EOF 00:23:21.329 )") 00:23:21.329 17:32:41 -- nvmf/common.sh@542 -- # cat 00:23:21.329 17:32:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:21.329 17:32:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:21.329 { 00:23:21.329 "params": { 00:23:21.329 "name": "Nvme$subsystem", 00:23:21.329 "trtype": "$TEST_TRANSPORT", 00:23:21.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.329 "adrfam": "ipv4", 00:23:21.329 "trsvcid": "$NVMF_PORT", 00:23:21.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.329 "hdgst": ${hdgst:-false}, 00:23:21.329 "ddgst": ${ddgst:-false} 00:23:21.329 }, 00:23:21.329 "method": "bdev_nvme_attach_controller" 00:23:21.329 } 00:23:21.329 EOF 00:23:21.329 )") 00:23:21.589 17:32:41 -- nvmf/common.sh@542 -- # cat 00:23:21.589 [2024-11-09 17:32:41.098331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:21.589 [2024-11-09 17:32:41.098383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775177 ] 00:23:21.589 17:32:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:21.589 17:32:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:21.589 { 00:23:21.589 "params": { 00:23:21.589 "name": "Nvme$subsystem", 00:23:21.589 "trtype": "$TEST_TRANSPORT", 00:23:21.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.589 "adrfam": "ipv4", 00:23:21.589 "trsvcid": "$NVMF_PORT", 00:23:21.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.589 "hdgst": ${hdgst:-false}, 00:23:21.589 "ddgst": ${ddgst:-false} 00:23:21.589 }, 00:23:21.589 "method": "bdev_nvme_attach_controller" 00:23:21.589 } 00:23:21.589 EOF 00:23:21.589 )") 00:23:21.589 17:32:41 -- nvmf/common.sh@542 -- # cat 00:23:21.589 17:32:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:21.589 17:32:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:21.589 { 00:23:21.589 "params": { 00:23:21.589 "name": "Nvme$subsystem", 00:23:21.589 "trtype": "$TEST_TRANSPORT", 00:23:21.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.589 "adrfam": "ipv4", 00:23:21.589 "trsvcid": "$NVMF_PORT", 00:23:21.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.589 "hdgst": ${hdgst:-false}, 00:23:21.589 "ddgst": ${ddgst:-false} 00:23:21.589 }, 00:23:21.589 "method": "bdev_nvme_attach_controller" 00:23:21.589 } 00:23:21.589 EOF 00:23:21.589 )") 00:23:21.589 17:32:41 -- nvmf/common.sh@542 -- # cat 00:23:21.589 17:32:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:21.589 17:32:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:21.589 { 00:23:21.589 
"params": { 00:23:21.589 "name": "Nvme$subsystem", 00:23:21.589 "trtype": "$TEST_TRANSPORT", 00:23:21.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.589 "adrfam": "ipv4", 00:23:21.589 "trsvcid": "$NVMF_PORT", 00:23:21.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.589 "hdgst": ${hdgst:-false}, 00:23:21.589 "ddgst": ${ddgst:-false} 00:23:21.589 }, 00:23:21.589 "method": "bdev_nvme_attach_controller" 00:23:21.589 } 00:23:21.589 EOF 00:23:21.589 )") 00:23:21.589 17:32:41 -- nvmf/common.sh@542 -- # cat 00:23:21.589 17:32:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:21.589 17:32:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:21.589 { 00:23:21.589 "params": { 00:23:21.589 "name": "Nvme$subsystem", 00:23:21.589 "trtype": "$TEST_TRANSPORT", 00:23:21.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.589 "adrfam": "ipv4", 00:23:21.589 "trsvcid": "$NVMF_PORT", 00:23:21.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.589 "hdgst": ${hdgst:-false}, 00:23:21.589 "ddgst": ${ddgst:-false} 00:23:21.589 }, 00:23:21.589 "method": "bdev_nvme_attach_controller" 00:23:21.589 } 00:23:21.589 EOF 00:23:21.589 )") 00:23:21.589 17:32:41 -- nvmf/common.sh@542 -- # cat 00:23:21.589 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.589 17:32:41 -- nvmf/common.sh@544 -- # jq . 00:23:21.589 17:32:41 -- nvmf/common.sh@545 -- # IFS=, 00:23:21.589 17:32:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:21.589 "params": { 00:23:21.589 "name": "Nvme1", 00:23:21.589 "trtype": "rdma", 00:23:21.589 "traddr": "192.168.100.8", 00:23:21.589 "adrfam": "ipv4", 00:23:21.589 "trsvcid": "4420", 00:23:21.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.589 "hdgst": false, 00:23:21.589 "ddgst": false 00:23:21.589 }, 00:23:21.589 "method": "bdev_nvme_attach_controller" 00:23:21.589 },{ 00:23:21.589 "params": { 00:23:21.589 "name": "Nvme2", 00:23:21.589 "trtype": "rdma", 00:23:21.589 "traddr": "192.168.100.8", 00:23:21.589 "adrfam": "ipv4", 00:23:21.589 "trsvcid": "4420", 00:23:21.589 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:21.589 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:21.589 "hdgst": false, 00:23:21.589 "ddgst": false 00:23:21.589 }, 00:23:21.589 "method": "bdev_nvme_attach_controller" 00:23:21.589 },{ 00:23:21.589 "params": { 00:23:21.589 "name": "Nvme3", 00:23:21.589 "trtype": "rdma", 00:23:21.589 "traddr": "192.168.100.8", 00:23:21.589 "adrfam": "ipv4", 00:23:21.589 "trsvcid": "4420", 00:23:21.589 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:21.589 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:21.589 "hdgst": false, 00:23:21.589 "ddgst": false 00:23:21.589 }, 00:23:21.589 "method": "bdev_nvme_attach_controller" 00:23:21.589 },{ 00:23:21.589 "params": { 00:23:21.589 "name": "Nvme4", 00:23:21.589 "trtype": "rdma", 00:23:21.589 "traddr": "192.168.100.8", 00:23:21.589 "adrfam": "ipv4", 00:23:21.589 "trsvcid": "4420", 00:23:21.589 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:21.589 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:21.589 "hdgst": false, 00:23:21.589 "ddgst": false 00:23:21.589 }, 00:23:21.589 "method": "bdev_nvme_attach_controller" 00:23:21.589 },{ 00:23:21.589 "params": { 00:23:21.589 "name": "Nvme5", 00:23:21.589 "trtype": "rdma", 00:23:21.589 "traddr": "192.168.100.8", 00:23:21.589 "adrfam": "ipv4", 00:23:21.589 "trsvcid": "4420", 00:23:21.589 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:23:21.589 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:21.589 "hdgst": false, 00:23:21.589 "ddgst": false 00:23:21.589 }, 00:23:21.589 "method": "bdev_nvme_attach_controller" 00:23:21.589 },{ 00:23:21.589 "params": { 00:23:21.589 "name": "Nvme6", 00:23:21.589 "trtype": "rdma", 00:23:21.589 "traddr": "192.168.100.8", 00:23:21.589 "adrfam": "ipv4", 00:23:21.589 "trsvcid": "4420", 00:23:21.589 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:21.589 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:21.589 "hdgst": false, 00:23:21.589 "ddgst": false 00:23:21.589 }, 00:23:21.589 "method": "bdev_nvme_attach_controller" 00:23:21.589 },{ 00:23:21.589 "params": { 00:23:21.589 "name": "Nvme7", 00:23:21.589 "trtype": "rdma", 00:23:21.589 "traddr": "192.168.100.8", 00:23:21.589 "adrfam": "ipv4", 00:23:21.589 "trsvcid": "4420", 00:23:21.589 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:21.589 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:21.589 "hdgst": false, 00:23:21.589 "ddgst": false 00:23:21.589 }, 00:23:21.589 "method": "bdev_nvme_attach_controller" 00:23:21.589 },{ 00:23:21.589 "params": { 00:23:21.589 "name": "Nvme8", 00:23:21.589 "trtype": "rdma", 00:23:21.589 "traddr": "192.168.100.8", 00:23:21.589 "adrfam": "ipv4", 00:23:21.589 "trsvcid": "4420", 00:23:21.589 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:21.589 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:21.589 "hdgst": false, 00:23:21.589 "ddgst": false 00:23:21.589 }, 00:23:21.589 "method": "bdev_nvme_attach_controller" 00:23:21.589 },{ 00:23:21.589 "params": { 00:23:21.589 "name": "Nvme9", 00:23:21.589 "trtype": "rdma", 00:23:21.589 "traddr": "192.168.100.8", 00:23:21.589 "adrfam": "ipv4", 00:23:21.589 "trsvcid": "4420", 00:23:21.589 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:21.589 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:21.589 "hdgst": false, 00:23:21.589 "ddgst": false 00:23:21.589 }, 00:23:21.589 "method": "bdev_nvme_attach_controller" 00:23:21.589 },{ 00:23:21.589 "params": { 00:23:21.589 "name": "Nvme10", 00:23:21.589 "trtype": "rdma", 00:23:21.589 "traddr": "192.168.100.8", 00:23:21.589 "adrfam": "ipv4", 00:23:21.589 "trsvcid": "4420", 00:23:21.589 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:21.589 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:21.589 "hdgst": false, 00:23:21.589 "ddgst": false 00:23:21.589 }, 00:23:21.589 "method": "bdev_nvme_attach_controller" 00:23:21.590 }' 00:23:21.590 [2024-11-09 17:32:41.170745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.590 [2024-11-09 17:32:41.237620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.528 Running I/O for 10 seconds... 
00:23:23.096 17:32:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.096 17:32:42 -- common/autotest_common.sh@862 -- # return 0 00:23:23.096 17:32:42 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:23.096 17:32:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.096 17:32:42 -- common/autotest_common.sh@10 -- # set +x 00:23:23.096 17:32:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.096 17:32:42 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:23.096 17:32:42 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:23.096 17:32:42 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:23.096 17:32:42 -- target/shutdown.sh@57 -- # local ret=1 00:23:23.096 17:32:42 -- target/shutdown.sh@58 -- # local i 00:23:23.096 17:32:42 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:23.096 17:32:42 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:23.096 17:32:42 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:23.096 17:32:42 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:23.096 17:32:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.096 17:32:42 -- common/autotest_common.sh@10 -- # set +x 00:23:23.356 17:32:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.356 17:32:42 -- target/shutdown.sh@60 -- # read_io_count=468 00:23:23.356 17:32:42 -- target/shutdown.sh@63 -- # '[' 468 -ge 100 ']' 00:23:23.356 17:32:42 -- target/shutdown.sh@64 -- # ret=0 00:23:23.356 17:32:42 -- target/shutdown.sh@65 -- # break 00:23:23.356 17:32:42 -- target/shutdown.sh@69 -- # return 0 00:23:23.356 17:32:42 -- target/shutdown.sh@109 -- # killprocess 2775177 00:23:23.356 17:32:42 -- common/autotest_common.sh@936 -- # '[' -z 2775177 ']' 00:23:23.356 17:32:42 -- common/autotest_common.sh@940 -- # kill -0 2775177 00:23:23.356 17:32:42 -- common/autotest_common.sh@941 -- # uname 00:23:23.356 17:32:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:23.356 17:32:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2775177 00:23:23.356 17:32:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:23.356 17:32:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:23.356 17:32:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2775177' 00:23:23.356 killing process with pid 2775177 00:23:23.356 17:32:42 -- common/autotest_common.sh@955 -- # kill 2775177 00:23:23.356 17:32:42 -- common/autotest_common.sh@960 -- # wait 2775177 00:23:23.356 Received shutdown signal, test time was about 0.937096 seconds 00:23:23.356 00:23:23.356 Latency(us) 00:23:23.356 [2024-11-09T16:32:43.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.356 [2024-11-09T16:32:43.126Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:23.356 Verification LBA range: start 0x0 length 0x400 00:23:23.356 Nvme1n1 : 0.93 713.61 44.60 0.00 0.00 88622.81 7602.18 115762.79 00:23:23.356 [2024-11-09T16:32:43.126Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:23.356 Verification LBA range: start 0x0 length 0x400 00:23:23.356 Nvme2n1 : 0.93 718.17 44.89 0.00 0.00 87355.46 7916.75 111568.49 00:23:23.356 [2024-11-09T16:32:43.126Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:23.356 Verification LBA range: start 0x0 length 0x400 00:23:23.356 Nvme3n1 : 
0.93 738.97 46.19 0.00 0.00 84170.36 8074.04 104857.60 00:23:23.356 [2024-11-09T16:32:43.126Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:23.356 Verification LBA range: start 0x0 length 0x400 00:23:23.356 Nvme4n1 : 0.93 747.91 46.74 0.00 0.00 82564.93 8178.89 73819.75 00:23:23.356 [2024-11-09T16:32:43.126Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:23.356 Verification LBA range: start 0x0 length 0x400 00:23:23.356 Nvme5n1 : 0.93 742.86 46.43 0.00 0.00 82581.50 8231.32 71722.60 00:23:23.356 [2024-11-09T16:32:43.126Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:23.356 Verification LBA range: start 0x0 length 0x400 00:23:23.356 Nvme6n1 : 0.93 742.11 46.38 0.00 0.00 82074.48 8388.61 70464.31 00:23:23.356 [2024-11-09T16:32:43.126Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:23.356 Verification LBA range: start 0x0 length 0x400 00:23:23.356 Nvme7n1 : 0.93 741.34 46.33 0.00 0.00 81554.13 8493.47 70044.88 00:23:23.356 [2024-11-09T16:32:43.126Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:23.356 Verification LBA range: start 0x0 length 0x400 00:23:23.356 Nvme8n1 : 0.93 740.59 46.29 0.00 0.00 81052.35 8650.75 71722.60 00:23:23.356 [2024-11-09T16:32:43.126Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:23.356 Verification LBA range: start 0x0 length 0x400 00:23:23.356 Nvme9n1 : 0.94 739.84 46.24 0.00 0.00 80537.00 8755.61 72980.89 00:23:23.356 [2024-11-09T16:32:43.126Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:23.356 Verification LBA range: start 0x0 length 0x400 00:23:23.356 Nvme10n1 : 0.94 515.85 32.24 0.00 0.00 114661.19 7759.46 325477.99 00:23:23.356 [2024-11-09T16:32:43.126Z] =================================================================================================================== 00:23:23.356 [2024-11-09T16:32:43.126Z] Total : 7141.25 446.33 0.00 0.00 85620.32 7602.18 325477.99 00:23:23.616 17:32:43 -- target/shutdown.sh@112 -- # sleep 1 00:23:24.995 17:32:44 -- target/shutdown.sh@113 -- # kill -0 2774852 00:23:24.995 17:32:44 -- target/shutdown.sh@115 -- # stoptarget 00:23:24.995 17:32:44 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:24.995 17:32:44 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:24.995 17:32:44 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:24.995 17:32:44 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:24.995 17:32:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:24.995 17:32:44 -- nvmf/common.sh@116 -- # sync 00:23:24.995 17:32:44 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:24.995 17:32:44 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:24.995 17:32:44 -- nvmf/common.sh@119 -- # set +e 00:23:24.995 17:32:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:24.995 17:32:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:24.995 rmmod nvme_rdma 00:23:24.995 rmmod nvme_fabrics 00:23:24.995 17:32:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:24.995 17:32:44 -- nvmf/common.sh@123 -- # set -e 00:23:24.995 17:32:44 -- nvmf/common.sh@124 -- # return 0 00:23:24.995 17:32:44 -- nvmf/common.sh@477 -- # '[' -n 2774852 ']' 00:23:24.995 17:32:44 -- nvmf/common.sh@478 -- # killprocess 2774852 00:23:24.995 17:32:44 -- 
common/autotest_common.sh@936 -- # '[' -z 2774852 ']' 00:23:24.995 17:32:44 -- common/autotest_common.sh@940 -- # kill -0 2774852 00:23:24.995 17:32:44 -- common/autotest_common.sh@941 -- # uname 00:23:24.995 17:32:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:24.995 17:32:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2774852 00:23:24.995 17:32:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:24.995 17:32:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:24.995 17:32:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2774852' 00:23:24.995 killing process with pid 2774852 00:23:24.995 17:32:44 -- common/autotest_common.sh@955 -- # kill 2774852 00:23:24.995 17:32:44 -- common/autotest_common.sh@960 -- # wait 2774852 00:23:25.255 17:32:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:25.255 17:32:44 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:25.255 00:23:25.255 real 0m5.779s 00:23:25.255 user 0m23.451s 00:23:25.255 sys 0m1.183s 00:23:25.255 17:32:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:25.255 17:32:44 -- common/autotest_common.sh@10 -- # set +x 00:23:25.255 ************************************ 00:23:25.255 END TEST nvmf_shutdown_tc2 00:23:25.255 ************************************ 00:23:25.255 17:32:44 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:25.255 17:32:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:25.255 17:32:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:25.255 17:32:44 -- common/autotest_common.sh@10 -- # set +x 00:23:25.255 ************************************ 00:23:25.255 START TEST nvmf_shutdown_tc3 00:23:25.255 ************************************ 00:23:25.255 17:32:45 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc3 00:23:25.255 17:32:45 -- target/shutdown.sh@120 -- # starttarget 00:23:25.255 17:32:45 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:25.255 17:32:45 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:25.255 17:32:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.255 17:32:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:25.255 17:32:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:25.255 17:32:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:25.255 17:32:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.255 17:32:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.255 17:32:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.255 17:32:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:25.255 17:32:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:25.255 17:32:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:25.255 17:32:45 -- common/autotest_common.sh@10 -- # set +x 00:23:25.255 17:32:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:25.255 17:32:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:25.255 17:32:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:25.255 17:32:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:25.255 17:32:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:25.255 17:32:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:25.255 17:32:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:25.255 17:32:45 -- nvmf/common.sh@294 -- # net_devs=() 00:23:25.255 17:32:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:25.255 17:32:45 -- 
nvmf/common.sh@295 -- # e810=() 00:23:25.255 17:32:45 -- nvmf/common.sh@295 -- # local -ga e810 00:23:25.255 17:32:45 -- nvmf/common.sh@296 -- # x722=() 00:23:25.515 17:32:45 -- nvmf/common.sh@296 -- # local -ga x722 00:23:25.515 17:32:45 -- nvmf/common.sh@297 -- # mlx=() 00:23:25.515 17:32:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:25.515 17:32:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.515 17:32:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.515 17:32:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.515 17:32:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.515 17:32:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.515 17:32:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.515 17:32:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.515 17:32:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.515 17:32:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.515 17:32:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.515 17:32:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.515 17:32:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:25.515 17:32:45 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:25.515 17:32:45 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:25.515 17:32:45 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:25.515 17:32:45 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:25.515 17:32:45 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:25.515 17:32:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:25.515 17:32:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:25.516 17:32:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:25.516 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:25.516 17:32:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:25.516 17:32:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:25.516 17:32:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:25.516 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:25.516 17:32:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:25.516 17:32:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:25.516 17:32:45 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:25.516 17:32:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.516 17:32:45 -- nvmf/common.sh@383 -- # 
(( 1 == 0 )) 00:23:25.516 17:32:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.516 17:32:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:25.516 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:25.516 17:32:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.516 17:32:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:25.516 17:32:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.516 17:32:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:25.516 17:32:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.516 17:32:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:25.516 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:25.516 17:32:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.516 17:32:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:25.516 17:32:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:25.516 17:32:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:25.516 17:32:45 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:25.516 17:32:45 -- nvmf/common.sh@57 -- # uname 00:23:25.516 17:32:45 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:25.516 17:32:45 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:25.516 17:32:45 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:25.516 17:32:45 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:25.516 17:32:45 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:25.516 17:32:45 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:25.516 17:32:45 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:25.516 17:32:45 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:25.516 17:32:45 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:25.516 17:32:45 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:25.516 17:32:45 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:25.516 17:32:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:25.516 17:32:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:25.516 17:32:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:25.516 17:32:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:25.516 17:32:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:25.516 17:32:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:25.516 17:32:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.516 17:32:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:25.516 17:32:45 -- nvmf/common.sh@104 -- # continue 2 00:23:25.516 17:32:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:25.516 17:32:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.516 17:32:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.516 17:32:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:25.516 17:32:45 -- nvmf/common.sh@104 -- # continue 2 00:23:25.516 17:32:45 -- nvmf/common.sh@72 -- # for nic_name 
in $(get_rdma_if_list) 00:23:25.516 17:32:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:25.516 17:32:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:25.516 17:32:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:25.516 17:32:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:25.516 17:32:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:25.516 17:32:45 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:25.516 17:32:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:25.516 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:25.516 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:25.516 altname enp217s0f0np0 00:23:25.516 altname ens818f0np0 00:23:25.516 inet 192.168.100.8/24 scope global mlx_0_0 00:23:25.516 valid_lft forever preferred_lft forever 00:23:25.516 17:32:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:25.516 17:32:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:25.516 17:32:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:25.516 17:32:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:25.516 17:32:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:25.516 17:32:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:25.516 17:32:45 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:25.516 17:32:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:25.516 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:25.516 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:25.516 altname enp217s0f1np1 00:23:25.516 altname ens818f1np1 00:23:25.516 inet 192.168.100.9/24 scope global mlx_0_1 00:23:25.516 valid_lft forever preferred_lft forever 00:23:25.516 17:32:45 -- nvmf/common.sh@410 -- # return 0 00:23:25.516 17:32:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:25.516 17:32:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:25.516 17:32:45 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:25.516 17:32:45 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:25.516 17:32:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:25.516 17:32:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:25.516 17:32:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:25.516 17:32:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:25.516 17:32:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:25.516 17:32:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:25.516 17:32:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.516 17:32:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:25.516 17:32:45 -- nvmf/common.sh@104 -- # continue 2 00:23:25.516 17:32:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:25.516 17:32:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.516 17:32:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.516 17:32:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:25.516 17:32:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:25.516 17:32:45 -- 
nvmf/common.sh@104 -- # continue 2 00:23:25.516 17:32:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:25.516 17:32:45 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:25.516 17:32:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:25.516 17:32:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:25.516 17:32:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:25.516 17:32:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:25.516 17:32:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:25.516 17:32:45 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:25.516 17:32:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:25.516 17:32:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:25.516 17:32:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:25.516 17:32:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:25.516 17:32:45 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:25.516 192.168.100.9' 00:23:25.516 17:32:45 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:25.516 192.168.100.9' 00:23:25.516 17:32:45 -- nvmf/common.sh@445 -- # head -n 1 00:23:25.516 17:32:45 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:25.516 17:32:45 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:25.516 192.168.100.9' 00:23:25.516 17:32:45 -- nvmf/common.sh@446 -- # tail -n +2 00:23:25.516 17:32:45 -- nvmf/common.sh@446 -- # head -n 1 00:23:25.516 17:32:45 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:25.516 17:32:45 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:25.516 17:32:45 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:25.516 17:32:45 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:25.516 17:32:45 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:25.516 17:32:45 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:25.516 17:32:45 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:25.516 17:32:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:25.516 17:32:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.517 17:32:45 -- common/autotest_common.sh@10 -- # set +x 00:23:25.776 17:32:45 -- nvmf/common.sh@469 -- # nvmfpid=2775926 00:23:25.776 17:32:45 -- nvmf/common.sh@470 -- # waitforlisten 2775926 00:23:25.776 17:32:45 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:25.776 17:32:45 -- common/autotest_common.sh@829 -- # '[' -z 2775926 ']' 00:23:25.776 17:32:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.776 17:32:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.776 17:32:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.776 17:32:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.776 17:32:45 -- common/autotest_common.sh@10 -- # set +x 00:23:25.776 [2024-11-09 17:32:45.336207] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
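Everything from gather_supported_nvmf_pci_devs down to allocate_nic_ips above reduces to: find the two mlx5 ports (0000:d9:00.0/0000:d9:00.1), map them to the mlx_0_0/mlx_0_1 netdevs, load the IB/RDMA kernel modules, and read one IPv4 address per port. A condensed sketch of the address extraction that produced 192.168.100.8 and 192.168.100.9 (the ip/awk/cut pipeline is the same one shown in the trace; the loop wrapper is only illustrative):

# read the first IPv4 address of each RDMA-backed netdev, as get_ip_address does above
for ifc in mlx_0_0 mlx_0_1; do
  ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# on this host: 192.168.100.8 (NVMF_FIRST_TARGET_IP) and 192.168.100.9 (NVMF_SECOND_TARGET_IP)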
00:23:25.776 [2024-11-09 17:32:45.336264] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.776 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.776 [2024-11-09 17:32:45.403188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.776 [2024-11-09 17:32:45.472076] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:25.776 [2024-11-09 17:32:45.472191] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.776 [2024-11-09 17:32:45.472201] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.776 [2024-11-09 17:32:45.472210] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.776 [2024-11-09 17:32:45.472311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.776 [2024-11-09 17:32:45.472397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.776 [2024-11-09 17:32:45.472506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.776 [2024-11-09 17:32:45.472507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:26.714 17:32:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.714 17:32:46 -- common/autotest_common.sh@862 -- # return 0 00:23:26.714 17:32:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:26.714 17:32:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.714 17:32:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.714 17:32:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.714 17:32:46 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:26.714 17:32:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.714 17:32:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.714 [2024-11-09 17:32:46.233653] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12ee380/0x12f2870) succeed. 00:23:26.714 [2024-11-09 17:32:46.242843] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12ef970/0x1333f10) succeed. 
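With the target process up (reactors running on cores 1-4), the script creates the RDMA transport through rpc_cmd, which in the test harness effectively forwards to scripts/rpc.py against the target's default RPC socket; the two create_ib_device notices that follow confirm both mlx5 ports were claimed. A sketch of the same call issued directly (assumes the default /var/tmp/spdk.sock and that rpc.py lives under the workspace spdk checkout; the flags are copied from the trace):

# create the NVMe-oF RDMA transport, mirroring the nvmf_create_transport call in the trace
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport \
  -t rdma --num-shared-buffers 1024 -u 8192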
00:23:26.714 17:32:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.714 17:32:46 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:26.715 17:32:46 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:26.715 17:32:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:26.715 17:32:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.715 17:32:46 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:26.715 17:32:46 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.715 17:32:46 -- target/shutdown.sh@28 -- # cat 00:23:26.715 17:32:46 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.715 17:32:46 -- target/shutdown.sh@28 -- # cat 00:23:26.715 17:32:46 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.715 17:32:46 -- target/shutdown.sh@28 -- # cat 00:23:26.715 17:32:46 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.715 17:32:46 -- target/shutdown.sh@28 -- # cat 00:23:26.715 17:32:46 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.715 17:32:46 -- target/shutdown.sh@28 -- # cat 00:23:26.715 17:32:46 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.715 17:32:46 -- target/shutdown.sh@28 -- # cat 00:23:26.715 17:32:46 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.715 17:32:46 -- target/shutdown.sh@28 -- # cat 00:23:26.715 17:32:46 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.715 17:32:46 -- target/shutdown.sh@28 -- # cat 00:23:26.715 17:32:46 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.715 17:32:46 -- target/shutdown.sh@28 -- # cat 00:23:26.715 17:32:46 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.715 17:32:46 -- target/shutdown.sh@28 -- # cat 00:23:26.715 17:32:46 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:26.715 17:32:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.715 17:32:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.715 Malloc1 00:23:26.715 [2024-11-09 17:32:46.464433] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:26.974 Malloc2 00:23:26.974 Malloc3 00:23:26.974 Malloc4 00:23:26.974 Malloc5 00:23:26.974 Malloc6 00:23:26.974 Malloc7 00:23:27.234 Malloc8 00:23:27.234 Malloc9 00:23:27.234 Malloc10 00:23:27.234 17:32:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.234 17:32:46 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:27.234 17:32:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:27.234 17:32:46 -- common/autotest_common.sh@10 -- # set +x 00:23:27.234 17:32:46 -- target/shutdown.sh@124 -- # perfpid=2776248 00:23:27.234 17:32:46 -- target/shutdown.sh@125 -- # waitforlisten 2776248 /var/tmp/bdevperf.sock 00:23:27.234 17:32:46 -- common/autotest_common.sh@829 -- # '[' -z 2776248 ']' 00:23:27.234 17:32:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.234 17:32:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.234 17:32:46 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:27.234 17:32:46 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:27.234 17:32:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.234 17:32:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.234 17:32:46 -- nvmf/common.sh@520 -- # config=() 00:23:27.234 17:32:46 -- common/autotest_common.sh@10 -- # set +x 00:23:27.234 17:32:46 -- nvmf/common.sh@520 -- # local subsystem config 00:23:27.234 17:32:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.234 { 00:23:27.234 "params": { 00:23:27.234 "name": "Nvme$subsystem", 00:23:27.234 "trtype": "$TEST_TRANSPORT", 00:23:27.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.234 "adrfam": "ipv4", 00:23:27.234 "trsvcid": "$NVMF_PORT", 00:23:27.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.234 "hdgst": ${hdgst:-false}, 00:23:27.234 "ddgst": ${ddgst:-false} 00:23:27.234 }, 00:23:27.234 "method": "bdev_nvme_attach_controller" 00:23:27.234 } 00:23:27.234 EOF 00:23:27.234 )") 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # cat 00:23:27.234 17:32:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.234 { 00:23:27.234 "params": { 00:23:27.234 "name": "Nvme$subsystem", 00:23:27.234 "trtype": "$TEST_TRANSPORT", 00:23:27.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.234 "adrfam": "ipv4", 00:23:27.234 "trsvcid": "$NVMF_PORT", 00:23:27.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.234 "hdgst": ${hdgst:-false}, 00:23:27.234 "ddgst": ${ddgst:-false} 00:23:27.234 }, 00:23:27.234 "method": "bdev_nvme_attach_controller" 00:23:27.234 } 00:23:27.234 EOF 00:23:27.234 )") 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # cat 00:23:27.234 17:32:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.234 { 00:23:27.234 "params": { 00:23:27.234 "name": "Nvme$subsystem", 00:23:27.234 "trtype": "$TEST_TRANSPORT", 00:23:27.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.234 "adrfam": "ipv4", 00:23:27.234 "trsvcid": "$NVMF_PORT", 00:23:27.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.234 "hdgst": ${hdgst:-false}, 00:23:27.234 "ddgst": ${ddgst:-false} 00:23:27.234 }, 00:23:27.234 "method": "bdev_nvme_attach_controller" 00:23:27.234 } 00:23:27.234 EOF 00:23:27.234 )") 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # cat 00:23:27.234 17:32:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.234 { 00:23:27.234 "params": { 00:23:27.234 "name": "Nvme$subsystem", 00:23:27.234 "trtype": "$TEST_TRANSPORT", 00:23:27.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.234 "adrfam": "ipv4", 00:23:27.234 "trsvcid": "$NVMF_PORT", 00:23:27.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.234 "hdgst": ${hdgst:-false}, 00:23:27.234 "ddgst": ${ddgst:-false} 00:23:27.234 }, 00:23:27.234 "method": "bdev_nvme_attach_controller" 00:23:27.234 } 00:23:27.234 EOF 00:23:27.234 )") 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # cat 00:23:27.234 17:32:46 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.234 { 00:23:27.234 "params": { 00:23:27.234 "name": "Nvme$subsystem", 00:23:27.234 "trtype": "$TEST_TRANSPORT", 00:23:27.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.234 "adrfam": "ipv4", 00:23:27.234 "trsvcid": "$NVMF_PORT", 00:23:27.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.234 "hdgst": ${hdgst:-false}, 00:23:27.234 "ddgst": ${ddgst:-false} 00:23:27.234 }, 00:23:27.234 "method": "bdev_nvme_attach_controller" 00:23:27.234 } 00:23:27.234 EOF 00:23:27.234 )") 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # cat 00:23:27.234 [2024-11-09 17:32:46.953666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:27.234 [2024-11-09 17:32:46.953715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2776248 ] 00:23:27.234 17:32:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.234 { 00:23:27.234 "params": { 00:23:27.234 "name": "Nvme$subsystem", 00:23:27.234 "trtype": "$TEST_TRANSPORT", 00:23:27.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.234 "adrfam": "ipv4", 00:23:27.234 "trsvcid": "$NVMF_PORT", 00:23:27.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.234 "hdgst": ${hdgst:-false}, 00:23:27.234 "ddgst": ${ddgst:-false} 00:23:27.234 }, 00:23:27.234 "method": "bdev_nvme_attach_controller" 00:23:27.234 } 00:23:27.234 EOF 00:23:27.234 )") 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # cat 00:23:27.234 17:32:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.234 { 00:23:27.234 "params": { 00:23:27.234 "name": "Nvme$subsystem", 00:23:27.234 "trtype": "$TEST_TRANSPORT", 00:23:27.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.234 "adrfam": "ipv4", 00:23:27.234 "trsvcid": "$NVMF_PORT", 00:23:27.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.234 "hdgst": ${hdgst:-false}, 00:23:27.234 "ddgst": ${ddgst:-false} 00:23:27.234 }, 00:23:27.234 "method": "bdev_nvme_attach_controller" 00:23:27.234 } 00:23:27.234 EOF 00:23:27.234 )") 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # cat 00:23:27.234 17:32:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.234 17:32:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.234 { 00:23:27.234 "params": { 00:23:27.235 "name": "Nvme$subsystem", 00:23:27.235 "trtype": "$TEST_TRANSPORT", 00:23:27.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.235 "adrfam": "ipv4", 00:23:27.235 "trsvcid": "$NVMF_PORT", 00:23:27.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.235 "hdgst": ${hdgst:-false}, 00:23:27.235 "ddgst": ${ddgst:-false} 00:23:27.235 }, 00:23:27.235 "method": "bdev_nvme_attach_controller" 00:23:27.235 } 00:23:27.235 EOF 00:23:27.235 )") 00:23:27.235 17:32:46 -- nvmf/common.sh@542 -- # cat 00:23:27.235 17:32:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.235 17:32:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.235 { 00:23:27.235 "params": { 
00:23:27.235 "name": "Nvme$subsystem", 00:23:27.235 "trtype": "$TEST_TRANSPORT", 00:23:27.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.235 "adrfam": "ipv4", 00:23:27.235 "trsvcid": "$NVMF_PORT", 00:23:27.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.235 "hdgst": ${hdgst:-false}, 00:23:27.235 "ddgst": ${ddgst:-false} 00:23:27.235 }, 00:23:27.235 "method": "bdev_nvme_attach_controller" 00:23:27.235 } 00:23:27.235 EOF 00:23:27.235 )") 00:23:27.235 17:32:46 -- nvmf/common.sh@542 -- # cat 00:23:27.235 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.235 17:32:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.235 17:32:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.235 { 00:23:27.235 "params": { 00:23:27.235 "name": "Nvme$subsystem", 00:23:27.235 "trtype": "$TEST_TRANSPORT", 00:23:27.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.235 "adrfam": "ipv4", 00:23:27.235 "trsvcid": "$NVMF_PORT", 00:23:27.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.235 "hdgst": ${hdgst:-false}, 00:23:27.235 "ddgst": ${ddgst:-false} 00:23:27.235 }, 00:23:27.235 "method": "bdev_nvme_attach_controller" 00:23:27.235 } 00:23:27.235 EOF 00:23:27.235 )") 00:23:27.235 17:32:46 -- nvmf/common.sh@542 -- # cat 00:23:27.235 17:32:46 -- nvmf/common.sh@544 -- # jq . 00:23:27.235 17:32:46 -- nvmf/common.sh@545 -- # IFS=, 00:23:27.235 17:32:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:27.235 "params": { 00:23:27.235 "name": "Nvme1", 00:23:27.235 "trtype": "rdma", 00:23:27.235 "traddr": "192.168.100.8", 00:23:27.235 "adrfam": "ipv4", 00:23:27.235 "trsvcid": "4420", 00:23:27.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.235 "hdgst": false, 00:23:27.235 "ddgst": false 00:23:27.235 }, 00:23:27.235 "method": "bdev_nvme_attach_controller" 00:23:27.235 },{ 00:23:27.235 "params": { 00:23:27.235 "name": "Nvme2", 00:23:27.235 "trtype": "rdma", 00:23:27.235 "traddr": "192.168.100.8", 00:23:27.235 "adrfam": "ipv4", 00:23:27.235 "trsvcid": "4420", 00:23:27.235 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:27.235 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:27.235 "hdgst": false, 00:23:27.235 "ddgst": false 00:23:27.235 }, 00:23:27.235 "method": "bdev_nvme_attach_controller" 00:23:27.235 },{ 00:23:27.235 "params": { 00:23:27.235 "name": "Nvme3", 00:23:27.235 "trtype": "rdma", 00:23:27.235 "traddr": "192.168.100.8", 00:23:27.235 "adrfam": "ipv4", 00:23:27.235 "trsvcid": "4420", 00:23:27.235 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:27.235 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:27.235 "hdgst": false, 00:23:27.235 "ddgst": false 00:23:27.235 }, 00:23:27.235 "method": "bdev_nvme_attach_controller" 00:23:27.235 },{ 00:23:27.235 "params": { 00:23:27.235 "name": "Nvme4", 00:23:27.235 "trtype": "rdma", 00:23:27.235 "traddr": "192.168.100.8", 00:23:27.235 "adrfam": "ipv4", 00:23:27.235 "trsvcid": "4420", 00:23:27.235 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:27.235 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:27.235 "hdgst": false, 00:23:27.235 "ddgst": false 00:23:27.235 }, 00:23:27.235 "method": "bdev_nvme_attach_controller" 00:23:27.235 },{ 00:23:27.235 "params": { 00:23:27.235 "name": "Nvme5", 00:23:27.235 "trtype": "rdma", 00:23:27.235 "traddr": "192.168.100.8", 00:23:27.235 "adrfam": "ipv4", 00:23:27.235 "trsvcid": "4420", 00:23:27.235 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:23:27.235 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:27.235 "hdgst": false, 00:23:27.235 "ddgst": false 00:23:27.235 }, 00:23:27.235 "method": "bdev_nvme_attach_controller" 00:23:27.235 },{ 00:23:27.235 "params": { 00:23:27.235 "name": "Nvme6", 00:23:27.235 "trtype": "rdma", 00:23:27.235 "traddr": "192.168.100.8", 00:23:27.235 "adrfam": "ipv4", 00:23:27.235 "trsvcid": "4420", 00:23:27.235 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:27.235 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:27.235 "hdgst": false, 00:23:27.235 "ddgst": false 00:23:27.235 }, 00:23:27.235 "method": "bdev_nvme_attach_controller" 00:23:27.235 },{ 00:23:27.235 "params": { 00:23:27.235 "name": "Nvme7", 00:23:27.235 "trtype": "rdma", 00:23:27.235 "traddr": "192.168.100.8", 00:23:27.235 "adrfam": "ipv4", 00:23:27.235 "trsvcid": "4420", 00:23:27.235 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:27.235 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:27.235 "hdgst": false, 00:23:27.235 "ddgst": false 00:23:27.235 }, 00:23:27.235 "method": "bdev_nvme_attach_controller" 00:23:27.235 },{ 00:23:27.235 "params": { 00:23:27.235 "name": "Nvme8", 00:23:27.235 "trtype": "rdma", 00:23:27.235 "traddr": "192.168.100.8", 00:23:27.235 "adrfam": "ipv4", 00:23:27.235 "trsvcid": "4420", 00:23:27.235 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:27.235 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:27.235 "hdgst": false, 00:23:27.235 "ddgst": false 00:23:27.235 }, 00:23:27.235 "method": "bdev_nvme_attach_controller" 00:23:27.235 },{ 00:23:27.235 "params": { 00:23:27.235 "name": "Nvme9", 00:23:27.235 "trtype": "rdma", 00:23:27.235 "traddr": "192.168.100.8", 00:23:27.235 "adrfam": "ipv4", 00:23:27.235 "trsvcid": "4420", 00:23:27.235 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:27.235 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:27.235 "hdgst": false, 00:23:27.235 "ddgst": false 00:23:27.235 }, 00:23:27.235 "method": "bdev_nvme_attach_controller" 00:23:27.235 },{ 00:23:27.235 "params": { 00:23:27.235 "name": "Nvme10", 00:23:27.235 "trtype": "rdma", 00:23:27.235 "traddr": "192.168.100.8", 00:23:27.235 "adrfam": "ipv4", 00:23:27.235 "trsvcid": "4420", 00:23:27.235 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:27.235 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:27.235 "hdgst": false, 00:23:27.235 "ddgst": false 00:23:27.235 }, 00:23:27.235 "method": "bdev_nvme_attach_controller" 00:23:27.235 }' 00:23:27.495 [2024-11-09 17:32:47.024613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.495 [2024-11-09 17:32:47.091688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.433 Running I/O for 10 seconds... 
00:23:29.007 17:32:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.007 17:32:48 -- common/autotest_common.sh@862 -- # return 0 00:23:29.007 17:32:48 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:29.007 17:32:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.007 17:32:48 -- common/autotest_common.sh@10 -- # set +x 00:23:29.007 17:32:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.007 17:32:48 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:29.007 17:32:48 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:29.007 17:32:48 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:29.007 17:32:48 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:29.007 17:32:48 -- target/shutdown.sh@57 -- # local ret=1 00:23:29.007 17:32:48 -- target/shutdown.sh@58 -- # local i 00:23:29.007 17:32:48 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:29.007 17:32:48 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:29.007 17:32:48 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:29.007 17:32:48 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:29.007 17:32:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.007 17:32:48 -- common/autotest_common.sh@10 -- # set +x 00:23:29.007 17:32:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.007 17:32:48 -- target/shutdown.sh@60 -- # read_io_count=491 00:23:29.007 17:32:48 -- target/shutdown.sh@63 -- # '[' 491 -ge 100 ']' 00:23:29.007 17:32:48 -- target/shutdown.sh@64 -- # ret=0 00:23:29.007 17:32:48 -- target/shutdown.sh@65 -- # break 00:23:29.007 17:32:48 -- target/shutdown.sh@69 -- # return 0 00:23:29.007 17:32:48 -- target/shutdown.sh@134 -- # killprocess 2775926 00:23:29.007 17:32:48 -- common/autotest_common.sh@936 -- # '[' -z 2775926 ']' 00:23:29.007 17:32:48 -- common/autotest_common.sh@940 -- # kill -0 2775926 00:23:29.007 17:32:48 -- common/autotest_common.sh@941 -- # uname 00:23:29.366 17:32:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:29.366 17:32:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2775926 00:23:29.366 17:32:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:29.366 17:32:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:29.366 17:32:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2775926' 00:23:29.366 killing process with pid 2775926 00:23:29.366 17:32:48 -- common/autotest_common.sh@955 -- # kill 2775926 00:23:29.366 17:32:48 -- common/autotest_common.sh@960 -- # wait 2775926 00:23:29.650 17:32:49 -- target/shutdown.sh@135 -- # nvmfpid= 00:23:29.650 17:32:49 -- target/shutdown.sh@138 -- # sleep 1 00:23:30.231 [2024-11-09 17:32:49.891819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.231 [2024-11-09 17:32:49.891857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:8e1185d0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.231 [2024-11-09 17:32:49.891870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.231 [2024-11-09 17:32:49.891883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
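Before pulling the target out from under bdevperf, waitforio polls the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads (here it saw 491 on the first pass), which proves I/O is actually in flight when target process 2775926 is killed. A stripped-down sketch of that polling loop (the rpc.py path is assumed from the workspace layout, the trace itself goes through the rpc_cmd helper, and the sleep between retries is an assumption since the first pass already succeeded in this run):

# poll bdevperf's iostat until the first bdev shows at least 100 completed reads
for (( i = 10; i != 0; i-- )); do
  read_io_count=$(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
  [ "$read_io_count" -ge 100 ] && break
  sleep 1
done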
DELETION (00/08) qid:0 cid:0 cdw0:8e1185d0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.231 [2024-11-09 17:32:49.891893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.231 [2024-11-09 17:32:49.891901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:8e1185d0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.231 [2024-11-09 17:32:49.891909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.231 [2024-11-09 17:32:49.891917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:8e1185d0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.231 [2024-11-09 17:32:49.894659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:30.231 [2024-11-09 17:32:49.894710] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:30.231 [2024-11-09 17:32:49.894778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.231 [2024-11-09 17:32:49.894814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.231 [2024-11-09 17:32:49.894846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.231 [2024-11-09 17:32:49.894876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.231 [2024-11-09 17:32:49.894886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.231 [2024-11-09 17:32:49.894894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.894903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.894911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.896596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:30.232 [2024-11-09 17:32:49.896638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:23:30.232 [2024-11-09 17:32:49.896688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.896720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.896754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.896784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.896817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.896847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.896879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.896909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.899276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:30.232 [2024-11-09 17:32:49.899311] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:30.232 [2024-11-09 17:32:49.899329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.899340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.899349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.899358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.899368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.899376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.899385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.899393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.901352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:30.232 [2024-11-09 17:32:49.901393] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:23:30.232 [2024-11-09 17:32:49.901441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.901486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.901520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.901550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.901582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.901613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.901645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.901676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.904066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:30.232 [2024-11-09 17:32:49.904106] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:30.232 [2024-11-09 17:32:49.904153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.904186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.904219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.904255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.904272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.904284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.904297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.904310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.906557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:30.232 [2024-11-09 17:32:49.906598] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:23:30.232 [2024-11-09 17:32:49.906646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.906679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.906712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.906742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.906774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.906805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.906837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.232 [2024-11-09 17:32:49.906867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.232 [2024-11-09 17:32:49.909040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:30.232 [2024-11-09 17:32:49.909057] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:30.232 [2024-11-09 17:32:49.909079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.233 [2024-11-09 17:32:49.909092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.909106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.233 [2024-11-09 17:32:49.909118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.909131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.233 [2024-11-09 17:32:49.909144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.909158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.233 [2024-11-09 17:32:49.909170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.911327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:30.233 [2024-11-09 17:32:49.911374] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:23:30.233 [2024-11-09 17:32:49.911422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.233 [2024-11-09 17:32:49.911467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.911501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.233 [2024-11-09 17:32:49.911532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.911563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.233 [2024-11-09 17:32:49.911593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.911625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.233 [2024-11-09 17:32:49.911655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.913735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:30.233 [2024-11-09 17:32:49.913776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:30.233 [2024-11-09 17:32:49.913824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.233 [2024-11-09 17:32:49.913856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.913889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.233 [2024-11-09 17:32:49.913919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.913950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.233 [2024-11-09 17:32:49.913981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.914013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.233 [2024-11-09 17:32:49.914043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:25356 cdw0:8e1185d0 sqhd:4d00 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.916337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:30.233 [2024-11-09 17:32:49.916377] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:30.233 [2024-11-09 17:32:49.916430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x183e00 00:23:30.233 [2024-11-09 17:32:49.916469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.916507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x181a00 00:23:30.233 [2024-11-09 17:32:49.916521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.916542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x183e00 00:23:30.233 [2024-11-09 17:32:49.916555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.916583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x181a00 00:23:30.233 [2024-11-09 17:32:49.916596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.916615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x180d00 00:23:30.233 [2024-11-09 17:32:49.916628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.916646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x181d00 00:23:30.233 [2024-11-09 17:32:49.916658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.916676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x181a00 00:23:30.233 [2024-11-09 17:32:49.916689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.916706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x181d00 00:23:30.233 [2024-11-09 17:32:49.916719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.916737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x181a00 00:23:30.233 [2024-11-09 17:32:49.916750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.916767] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x181d00 00:23:30.233 [2024-11-09 17:32:49.916780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.916797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x181d00 00:23:30.233 [2024-11-09 17:32:49.916810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.916828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x181d00 00:23:30.233 [2024-11-09 17:32:49.916841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.233 [2024-11-09 17:32:49.916858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019243e80 len:0x10000 key:0x182900 00:23:30.233 [2024-11-09 17:32:49.916871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.916889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x181a00 00:23:30.234 [2024-11-09 17:32:49.916904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.916921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x181d00 00:23:30.234 [2024-11-09 17:32:49.916934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.916951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e9ee40 len:0x10000 key:0x183400 00:23:30.234 [2024-11-09 17:32:49.916965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.916982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001920fcc0 len:0x10000 key:0x182900 00:23:30.234 [2024-11-09 17:32:49.916995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x181a00 00:23:30.234 [2024-11-09 17:32:49.917026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x183e00 00:23:30.234 [2024-11-09 17:32:49.917059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003edf040 len:0x10000 key:0x183400 00:23:30.234 [2024-11-09 17:32:49.917091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x181d00 00:23:30.234 [2024-11-09 17:32:49.917121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e7ed40 len:0x10000 key:0x183400 00:23:30.234 [2024-11-09 17:32:49.917151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x181a00 00:23:30.234 [2024-11-09 17:32:49.917181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x180d00 00:23:30.234 [2024-11-09 17:32:49.917213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x180d00 00:23:30.234 [2024-11-09 17:32:49.917245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001921fd40 len:0x10000 key:0x182900 00:23:30.234 [2024-11-09 17:32:49.917275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x180d00 00:23:30.234 [2024-11-09 17:32:49.917305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917322] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x181d00 00:23:30.234 [2024-11-09 17:32:49.917335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x183e00 00:23:30.234 [2024-11-09 17:32:49.917366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x181a00 00:23:30.234 [2024-11-09 17:32:49.917396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x183e00 00:23:30.234 [2024-11-09 17:32:49.917427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019270e80 len:0x10000 key:0x182900 00:23:30.234 [2024-11-09 17:32:49.917463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x180d00 00:23:30.234 [2024-11-09 17:32:49.917494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x180d00 00:23:30.234 [2024-11-09 17:32:49.917524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x181a00 00:23:30.234 [2024-11-09 17:32:49.917557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e8edc0 len:0x10000 key:0x183400 00:23:30.234 [2024-11-09 17:32:49.917588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019260e00 len:0x10000 key:0x182900 00:23:30.234 [2024-11-09 17:32:49.917621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x181d00 00:23:30.234 [2024-11-09 17:32:49.917651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.234 [2024-11-09 17:32:49.917669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x181d00 00:23:30.235 [2024-11-09 17:32:49.917682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.917700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x181d00 00:23:30.235 [2024-11-09 17:32:49.917713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.917730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x180d00 00:23:30.235 [2024-11-09 17:32:49.917744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.917761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x180d00 00:23:30.235 [2024-11-09 17:32:49.917775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.917793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x180d00 00:23:30.235 [2024-11-09 17:32:49.917806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.917823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x181d00 00:23:30.235 [2024-11-09 17:32:49.917836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.917854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x180d00 00:23:30.235 [2024-11-09 17:32:49.917867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.917884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x2000138ea980 len:0x10000 key:0x180d00 00:23:30.235 [2024-11-09 17:32:49.917897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.917915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca2c000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.917928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.917951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca4d000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.917964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.917983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105fc000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.917996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001061d000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.918028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001063e000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.918061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001065f000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.918092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d4d000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.918123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d2c000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.918154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d0b000 len:0x10000 
key:0x184300 00:23:30.235 [2024-11-09 17:32:49.918186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012cea000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.918217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebd1000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.918248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebb0000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.918279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b03000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.918315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ae2000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.918347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ac1000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.918377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010aa0000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.918409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb59000 len:0x10000 key:0x184300 00:23:30.235 [2024-11-09 17:32:49.918439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.235 [2024-11-09 17:32:49.918481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb38000 len:0x10000 key:0x184300 00:23:30.236 [2024-11-09 
17:32:49.918495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.921728] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257100 was disconnected and freed. reset controller. 00:23:30.236 [2024-11-09 17:32:49.921774] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.236 [2024-11-09 17:32:49.921819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e2eac0 len:0x10000 key:0x183400 00:23:30.236 [2024-11-09 17:32:49.921852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.921900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000717fc00 len:0x10000 key:0x183b00 00:23:30.236 [2024-11-09 17:32:49.921935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.921979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071afd80 len:0x10000 key:0x183b00 00:23:30.236 [2024-11-09 17:32:49.922011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002774c0 len:0x10000 key:0x183d00 00:23:30.236 [2024-11-09 17:32:49.922086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000708f480 len:0x10000 key:0x183b00 00:23:30.236 [2024-11-09 17:32:49.922161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b22f780 len:0x10000 key:0x184300 00:23:30.236 [2024-11-09 17:32:49.922244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2dfd00 len:0x10000 key:0x184300 00:23:30.236 [2024-11-09 17:32:49.922318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000703f200 len:0x10000 key:0x183b00 00:23:30.236 [2024-11-09 17:32:49.922393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000712f980 len:0x10000 key:0x183b00 00:23:30.236 [2024-11-09 17:32:49.922424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b24f880 len:0x10000 key:0x184300 00:23:30.236 [2024-11-09 17:32:49.922460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000701f100 len:0x10000 key:0x183b00 00:23:30.236 [2024-11-09 17:32:49.922492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000706f380 len:0x10000 key:0x183b00 00:23:30.236 [2024-11-09 17:32:49.922523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002b76c0 len:0x10000 key:0x183d00 00:23:30.236 [2024-11-09 17:32:49.922553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000287540 len:0x10000 key:0x183d00 00:23:30.236 [2024-11-09 17:32:49.922584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000207140 len:0x10000 key:0x183d00 00:23:30.236 [2024-11-09 17:32:49.922615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071dff00 len:0x10000 key:0x183b00 00:23:30.236 [2024-11-09 17:32:49.922647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070df700 len:0x10000 key:0x183b00 00:23:30.236 [2024-11-09 17:32:49.922680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 
dnr:0 00:23:30.236 [2024-11-09 17:32:49.922697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b29fb00 len:0x10000 key:0x184300 00:23:30.236 [2024-11-09 17:32:49.922711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002171c0 len:0x10000 key:0x183d00 00:23:30.236 [2024-11-09 17:32:49.922742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b21f700 len:0x10000 key:0x184300 00:23:30.236 [2024-11-09 17:32:49.922772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2afb80 len:0x10000 key:0x184300 00:23:30.236 [2024-11-09 17:32:49.922802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070ef780 len:0x10000 key:0x183b00 00:23:30.236 [2024-11-09 17:32:49.922833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000709f500 len:0x10000 key:0x183b00 00:23:30.236 [2024-11-09 17:32:49.922863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.236 [2024-11-09 17:32:49.922880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e3eb40 len:0x10000 key:0x183400 00:23:30.236 [2024-11-09 17:32:49.922894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.922911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000715fb00 len:0x10000 key:0x183b00 00:23:30.237 [2024-11-09 17:32:49.922924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.922941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e0e9c0 len:0x10000 key:0x183400 00:23:30.237 [2024-11-09 17:32:49.922954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 
17:32:49.922971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000705f300 len:0x10000 key:0x183b00 00:23:30.237 [2024-11-09 17:32:49.922984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002a7640 len:0x10000 key:0x183d00 00:23:30.237 [2024-11-09 17:32:49.923016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000247340 len:0x10000 key:0x183d00 00:23:30.237 [2024-11-09 17:32:49.923046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070bf600 len:0x10000 key:0x183b00 00:23:30.237 [2024-11-09 17:32:49.923077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070af580 len:0x10000 key:0x183b00 00:23:30.237 [2024-11-09 17:32:49.923106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b25f900 len:0x10000 key:0x184300 00:23:30.237 [2024-11-09 17:32:49.923137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e5ec40 len:0x10000 key:0x183400 00:23:30.237 [2024-11-09 17:32:49.923167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000711f900 len:0x10000 key:0x183b00 00:23:30.237 [2024-11-09 17:32:49.923197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b20f680 len:0x10000 key:0x184300 00:23:30.237 [2024-11-09 17:32:49.923227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002975c0 len:0x10000 key:0x183d00 00:23:30.237 [2024-11-09 17:32:49.923258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e53e000 len:0x10000 key:0x184300 00:23:30.237 [2024-11-09 17:32:49.923288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e55f000 len:0x10000 key:0x184300 00:23:30.237 [2024-11-09 17:32:49.923320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ed9000 len:0x10000 key:0x184300 00:23:30.237 [2024-11-09 17:32:49.923353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012eb8000 len:0x10000 key:0x184300 00:23:30.237 [2024-11-09 17:32:49.923385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e97000 len:0x10000 key:0x184300 00:23:30.237 [2024-11-09 17:32:49.923415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e76000 len:0x10000 key:0x184300 00:23:30.237 [2024-11-09 17:32:49.923447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e55000 len:0x10000 key:0x184300 00:23:30.237 [2024-11-09 17:32:49.923483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e34000 len:0x10000 key:0x184300 00:23:30.237 [2024-11-09 17:32:49.923515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013065000 len:0x10000 key:0x184300 00:23:30.237 [2024-11-09 17:32:49.923545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013044000 len:0x10000 key:0x184300 00:23:30.237 [2024-11-09 17:32:49.923577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.237 [2024-11-09 17:32:49.923596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013023000 len:0x10000 key:0x184300 00:23:30.237 [2024-11-09 17:32:49.923608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.923628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013002000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.923641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.923659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fe1000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.923672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.923690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fc0000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.923704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.923724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c03f000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.923737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.923756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c01e000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.923768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.923786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be2f000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.923799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.923818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20000be0e000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.923830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.923848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bded000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.923862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.923880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdcc000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.923892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.923910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdab000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.923924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.923942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd8a000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.923955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.923973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f096000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.923987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.924005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0b7000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.924018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.924035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0d8000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.924048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.924068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120cc000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.924081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.924100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120ab000 
len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.924113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.924131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001208a000 len:0x10000 key:0x184300 00:23:30.238 [2024-11-09 17:32:49.924144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.927512] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256ec0 was disconnected and freed. reset controller. 00:23:30.238 [2024-11-09 17:32:49.927559] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.238 [2024-11-09 17:32:49.927603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005cfd00 len:0x10000 key:0x183200 00:23:30.238 [2024-11-09 17:32:49.927637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.927707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000045f180 len:0x10000 key:0x183200 00:23:30.238 [2024-11-09 17:32:49.927741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.927784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000048f300 len:0x10000 key:0x183200 00:23:30.238 [2024-11-09 17:32:49.927817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.927861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000054f900 len:0x10000 key:0x183200 00:23:30.238 [2024-11-09 17:32:49.927893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.927935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001955fb80 len:0x10000 key:0x182a00 00:23:30.238 [2024-11-09 17:32:49.927967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.928009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000083ec00 len:0x10000 key:0x183c00 00:23:30.238 [2024-11-09 17:32:49.928041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.928084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008ef180 len:0x10000 key:0x183c00 00:23:30.238 [2024-11-09 17:32:49.928116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.238 [2024-11-09 17:32:49.928158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001950f900 len:0x10000 key:0x182a00 00:23:30.239 [2024-11-09 17:32:49.928197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000040ef00 len:0x10000 key:0x183200 00:23:30.239 [2024-11-09 17:32:49.928272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000085ed00 len:0x10000 key:0x183c00 00:23:30.239 [2024-11-09 17:32:49.928346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194ef800 len:0x10000 key:0x182a00 00:23:30.239 [2024-11-09 17:32:49.928417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001953fa80 len:0x10000 key:0x182a00 00:23:30.239 [2024-11-09 17:32:49.928448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000058fb00 len:0x10000 key:0x183200 00:23:30.239 [2024-11-09 17:32:49.928483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000055f980 len:0x10000 key:0x183200 00:23:30.239 [2024-11-09 17:32:49.928514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004df580 len:0x10000 key:0x183200 00:23:30.239 [2024-11-09 17:32:49.928544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004bf480 len:0x10000 key:0x183200 00:23:30.239 [2024-11-09 17:32:49.928578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195afe00 len:0x10000 key:0x182a00 00:23:30.239 [2024-11-09 17:32:49.928608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008aef80 len:0x10000 key:0x183c00 00:23:30.239 [2024-11-09 17:32:49.928639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ef600 len:0x10000 key:0x183200 00:23:30.239 [2024-11-09 17:32:49.928669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000082eb80 len:0x10000 key:0x183c00 00:23:30.239 [2024-11-09 17:32:49.928703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008bf000 len:0x10000 key:0x183c00 00:23:30.239 [2024-11-09 17:32:49.928734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195bfe80 len:0x10000 key:0x182a00 00:23:30.239 [2024-11-09 17:32:49.928764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001956fc00 len:0x10000 key:0x182a00 00:23:30.239 [2024-11-09 17:32:49.928795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005dfd80 len:0x10000 key:0x183200 00:23:30.239 [2024-11-09 17:32:49.928825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000043f080 len:0x10000 key:0x183200 00:23:30.239 [2024-11-09 17:32:49.928856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005afc00 len:0x10000 key:0x183200 00:23:30.239 [2024-11-09 17:32:49.928886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001952fa00 len:0x10000 key:0x182a00 00:23:30.239 [2024-11-09 17:32:49.928917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000057fa80 len:0x10000 key:0x183200 00:23:30.239 [2024-11-09 17:32:49.928948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000051f780 len:0x10000 key:0x183200 00:23:30.239 [2024-11-09 17:32:49.928978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.928996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001958fd00 len:0x10000 key:0x182a00 00:23:30.239 [2024-11-09 17:32:49.929009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.929029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001957fc80 len:0x10000 key:0x182a00 00:23:30.239 [2024-11-09 17:32:49.929042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.239 [2024-11-09 17:32:49.929059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000086ed80 len:0x10000 key:0x183c00 00:23:30.240 [2024-11-09 17:32:49.929074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000080ea80 len:0x10000 key:0x183c00 00:23:30.240 [2024-11-09 17:32:49.929105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195f0000 len:0x10000 key:0x182a00 00:23:30.240 [2024-11-09 17:32:49.929135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 
00:23:30.240 [2024-11-09 17:32:49.929152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000081eb00 len:0x10000 key:0x183c00 00:23:30.240 [2024-11-09 17:32:49.929165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000056fa00 len:0x10000 key:0x183200 00:23:30.240 [2024-11-09 17:32:49.929196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e95e000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e97f000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001312b000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001310a000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013275000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013254000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013233000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929434] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013212000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131f1000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131d0000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c24f000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c22e000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c20d000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1ec000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1cb000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1aa000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929723] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9ea000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bffd000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfdc000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfbb000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf9a000 len:0x10000 key:0x184300 00:23:30.240 [2024-11-09 17:32:49.929863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.240 [2024-11-09 17:32:49.929880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf79000 len:0x10000 key:0x184300 00:23:30.241 [2024-11-09 17:32:49.929894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.929912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf58000 len:0x10000 key:0x184300 00:23:30.241 [2024-11-09 17:32:49.929926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.929944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf37000 len:0x10000 key:0x184300 00:23:30.241 [2024-11-09 17:32:49.929957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.929975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf16000 len:0x10000 key:0x184300 00:23:30.241 [2024-11-09 17:32:49.929988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.930006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4b6000 len:0x10000 key:0x184300 00:23:30.241 [2024-11-09 17:32:49.930020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.930038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4d7000 len:0x10000 key:0x184300 00:23:30.241 [2024-11-09 17:32:49.930051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.930069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4f8000 len:0x10000 key:0x184300 00:23:30.241 [2024-11-09 17:32:49.930082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933019] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256c80 was disconnected and freed. reset controller. 00:23:30.241 [2024-11-09 17:32:49.933064] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.241 [2024-11-09 17:32:49.933107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfd00 len:0x10000 key:0x182b00 00:23:30.241 [2024-11-09 17:32:49.933141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966fa00 len:0x10000 key:0x182b00 00:23:30.241 [2024-11-09 17:32:49.933221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bbfe80 len:0x10000 key:0x182d00 00:23:30.241 [2024-11-09 17:32:49.933297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001945f380 len:0x10000 key:0x182a00 00:23:30.241 [2024-11-09 17:32:49.933372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001982f200 len:0x10000 key:0x182c00 00:23:30.241 [2024-11-09 17:32:49.933447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001991f980 len:0x10000 key:0x182c00 00:23:30.241 [2024-11-09 
17:32:49.933500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001947f480 len:0x10000 key:0x182a00 00:23:30.241 [2024-11-09 17:32:49.933531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001962f800 len:0x10000 key:0x182b00 00:23:30.241 [2024-11-09 17:32:49.933561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196afc00 len:0x10000 key:0x182b00 00:23:30.241 [2024-11-09 17:32:49.933592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001944f300 len:0x10000 key:0x182a00 00:23:30.241 [2024-11-09 17:32:49.933623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199cff00 len:0x10000 key:0x182c00 00:23:30.241 [2024-11-09 17:32:49.933657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001964f900 len:0x10000 key:0x182b00 00:23:30.241 [2024-11-09 17:32:49.933688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194bf680 len:0x10000 key:0x182a00 00:23:30.241 [2024-11-09 17:32:49.933719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001990f900 len:0x10000 key:0x182c00 00:23:30.241 [2024-11-09 17:32:49.933750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967fa80 len:0x10000 key:0x182b00 00:23:30.241 [2024-11-09 17:32:49.933780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001942f200 len:0x10000 key:0x182a00 00:23:30.241 [2024-11-09 17:32:49.933813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001985f380 len:0x10000 key:0x182c00 00:23:30.241 [2024-11-09 17:32:49.933843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001988f500 len:0x10000 key:0x182c00 00:23:30.241 [2024-11-09 17:32:49.933874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001963f880 len:0x10000 key:0x182b00 00:23:30.241 [2024-11-09 17:32:49.933905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.241 [2024-11-09 17:32:49.933922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bf0000 len:0x10000 key:0x182d00 00:23:30.242 [2024-11-09 17:32:49.933935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.933953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194cf700 len:0x10000 key:0x182a00 00:23:30.242 [2024-11-09 17:32:49.933966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.933984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196efe00 len:0x10000 key:0x182b00 00:23:30.242 [2024-11-09 17:32:49.934001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bcff00 len:0x10000 key:0x182d00 00:23:30.242 [2024-11-09 17:32:49.934033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001989f580 len:0x10000 key:0x182c00 00:23:30.242 [2024-11-09 17:32:49.934064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001969fb80 len:0x10000 key:0x182b00 00:23:30.242 [2024-11-09 17:32:49.934095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001943f280 len:0x10000 key:0x182a00 00:23:30.242 [2024-11-09 17:32:49.934127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196dfd80 len:0x10000 key:0x182b00 00:23:30.242 [2024-11-09 17:32:49.934158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001940f100 len:0x10000 key:0x182a00 00:23:30.242 [2024-11-09 17:32:49.934188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001992fa00 len:0x10000 key:0x182c00 00:23:30.242 [2024-11-09 17:32:49.934219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bdff80 len:0x10000 key:0x182d00 00:23:30.242 [2024-11-09 17:32:49.934249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001994fb00 len:0x10000 key:0x182c00 00:23:30.242 [2024-11-09 17:32:49.934282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfc80 len:0x10000 key:0x182b00 00:23:30.242 [2024-11-09 17:32:49.934313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001987f480 len:0x10000 key:0x182c00 00:23:30.242 [2024-11-09 17:32:49.934345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001995fb80 len:0x10000 key:0x182c00 00:23:30.242 [2024-11-09 17:32:49.934376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012216000 len:0x10000 key:0x184300 00:23:30.242 [2024-11-09 17:32:49.934407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012237000 len:0x10000 key:0x184300 00:23:30.242 [2024-11-09 17:32:49.934438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012258000 len:0x10000 key:0x184300 00:23:30.242 [2024-11-09 17:32:49.934492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012279000 len:0x10000 key:0x184300 00:23:30.242 [2024-11-09 17:32:49.934524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011edd000 len:0x10000 key:0x184300 00:23:30.242 [2024-11-09 17:32:49.934555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011efe000 len:0x10000 key:0x184300 00:23:30.242 [2024-11-09 17:32:49.934588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f1f000 len:0x10000 key:0x184300 00:23:30.242 [2024-11-09 17:32:49.934619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010890000 len:0x10000 key:0x184300 00:23:30.242 [2024-11-09 17:32:49.934650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 
00:23:30.242 [2024-11-09 17:32:49.934668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000108b1000 len:0x10000 key:0x184300 00:23:30.242 [2024-11-09 17:32:49.934681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd65000 len:0x10000 key:0x184300 00:23:30.242 [2024-11-09 17:32:49.934714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed9f000 len:0x10000 key:0x184300 00:23:30.242 [2024-11-09 17:32:49.934748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.242 [2024-11-09 17:32:49.934766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c8f000 len:0x10000 key:0x184300 00:23:30.242 [2024-11-09 17:32:49.934780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.934798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c6e000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.934812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.934830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124ec000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.934844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.934862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124cb000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.934875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.934893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121f5000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.934906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.934925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec97000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.934938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.934957] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec76000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.934971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.934989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec55000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.935003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.935021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d55000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.935034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.935052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d34000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.935065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.935085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c62d000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.935098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.935116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c60c000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.935129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.935147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5eb000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.935160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.935178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5ca000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.935192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.935210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5a9000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.935223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.935241] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c588000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.935254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.935272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c567000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.935285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.935303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c546000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.935316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.935335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8d6000 len:0x10000 key:0x184300 00:23:30.243 [2024-11-09 17:32:49.935348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.938272] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a40 was disconnected and freed. reset controller. 00:23:30.243 [2024-11-09 17:32:49.938316] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:30.243 [2024-11-09 17:32:49.938355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d7fc80 len:0x10000 key:0x182e00 00:23:30.243 [2024-11-09 17:32:49.938369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.938389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c0f100 len:0x10000 key:0x182e00 00:23:30.243 [2024-11-09 17:32:49.938406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.938423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c3f280 len:0x10000 key:0x182e00 00:23:30.243 [2024-11-09 17:32:49.938437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.938474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cff880 len:0x10000 key:0x182e00 00:23:30.243 [2024-11-09 17:32:49.938488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.938506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f0f900 len:0x10000 key:0x182f00 00:23:30.243 [2024-11-09 17:32:49.938519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.938537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ddff80 len:0x10000 key:0x182e00 00:23:30.243 [2024-11-09 17:32:49.938550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.938568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a9fb80 len:0x10000 key:0x182d00 00:23:30.243 [2024-11-09 17:32:49.938581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.243 [2024-11-09 17:32:49.938599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ebf680 len:0x10000 key:0x182f00 00:23:30.244 [2024-11-09 17:32:49.938611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.938629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fafe00 len:0x10000 key:0x182f00 00:23:30.244 [2024-11-09 17:32:49.938642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 
17:32:49.938660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a0f700 len:0x10000 key:0x182d00 00:23:30.244 [2024-11-09 17:32:49.938673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.938690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e9f580 len:0x10000 key:0x182f00 00:23:30.244 [2024-11-09 17:32:49.938704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.938721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eef800 len:0x10000 key:0x182f00 00:23:30.244 [2024-11-09 17:32:49.938734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.938751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d3fa80 len:0x10000 key:0x182e00 00:23:30.244 [2024-11-09 17:32:49.938764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.938785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d0f900 len:0x10000 key:0x182e00 00:23:30.244 [2024-11-09 17:32:49.938797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.938815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c8f500 len:0x10000 key:0x182e00 00:23:30.244 [2024-11-09 17:32:49.938828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.938845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c6f400 len:0x10000 key:0x182e00 00:23:30.244 [2024-11-09 17:32:49.938860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.938877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f5fb80 len:0x10000 key:0x182f00 00:23:30.244 [2024-11-09 17:32:49.938890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.938908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a5f980 len:0x10000 key:0x182d00 00:23:30.244 [2024-11-09 17:32:49.938921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.938938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c9f580 len:0x10000 key:0x182e00 00:23:30.244 [2024-11-09 17:32:49.938951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.938969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dcff00 len:0x10000 key:0x182e00 00:23:30.244 [2024-11-09 17:32:49.938982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.938999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a6fa00 len:0x10000 key:0x182d00 00:23:30.244 [2024-11-09 17:32:49.939012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.939030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f6fc00 len:0x10000 key:0x182f00 00:23:30.244 [2024-11-09 17:32:49.939043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.939061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f1f980 len:0x10000 key:0x182f00 00:23:30.244 [2024-11-09 17:32:49.939074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.939091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fd00 len:0x10000 key:0x182e00 00:23:30.244 [2024-11-09 17:32:49.939104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.939123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fdff80 len:0x10000 key:0x182f00 00:23:30.244 [2024-11-09 17:32:49.939136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.939153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d5fb80 len:0x10000 key:0x182e00 00:23:30.244 [2024-11-09 17:32:49.939166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.939183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019edf780 len:0x10000 key:0x182f00 00:23:30.244 [2024-11-09 17:32:49.939197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.939214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d2fa00 len:0x10000 key:0x182e00 00:23:30.244 [2024-11-09 17:32:49.939227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.939245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ccf700 len:0x10000 key:0x182e00 00:23:30.244 [2024-11-09 17:32:49.939259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.244 [2024-11-09 17:32:49.939277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f3fa80 len:0x10000 key:0x182f00 00:23:30.245 [2024-11-09 17:32:49.939290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f2fa00 len:0x10000 key:0x182f00 00:23:30.245 [2024-11-09 17:32:49.939321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a1f780 len:0x10000 key:0x182d00 00:23:30.245 [2024-11-09 17:32:49.939351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dafe00 len:0x10000 key:0x182e00 00:23:30.245 [2024-11-09 17:32:49.939382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f9fd80 len:0x10000 key:0x182f00 00:23:30.245 [2024-11-09 17:32:49.939412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dbfe80 len:0x10000 key:0x182e00 00:23:30.245 [2024-11-09 17:32:49.939443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d1f980 len:0x10000 key:0x182e00 00:23:30.245 [2024-11-09 17:32:49.939481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f19e000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1bf000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001333b000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001331a000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013485000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013464000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013443000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013422000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013401000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x2000133e0000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c45f000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c43e000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c41d000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3fc000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3db000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.939973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3ba000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.939986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.940004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010536000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.940017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.940034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010557000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.940048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.940066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010578000 len:0x10000 key:0x184300 
00:23:30.245 [2024-11-09 17:32:49.940079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.940097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011df6000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.940111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.245 [2024-11-09 17:32:49.940129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011dd5000 len:0x10000 key:0x184300 00:23:30.245 [2024-11-09 17:32:49.940142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.940160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011db4000 len:0x10000 key:0x184300 00:23:30.246 [2024-11-09 17:32:49.940173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.940193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be71000 len:0x10000 key:0x184300 00:23:30.246 [2024-11-09 17:32:49.940206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.940224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be50000 len:0x10000 key:0x184300 00:23:30.246 [2024-11-09 17:32:49.940237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.940255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2cf000 len:0x10000 key:0x184300 00:23:30.246 [2024-11-09 17:32:49.940269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.940287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2ae000 len:0x10000 key:0x184300 00:23:30.246 [2024-11-09 17:32:49.940300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.940318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d28d000 len:0x10000 key:0x184300 00:23:30.246 [2024-11-09 17:32:49.940331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.940368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d26c000 len:0x10000 key:0x184300 00:23:30.246 [2024-11-09 17:32:49.940381] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.943421] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256800 was disconnected and freed. reset controller. 00:23:30.246 [2024-11-09 17:32:49.943476] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.246 [2024-11-09 17:32:49.943520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x182f00 00:23:30.246 [2024-11-09 17:32:49.943553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.943616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x183300 00:23:30.246 [2024-11-09 17:32:49.943649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.943692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x183300 00:23:30.246 [2024-11-09 17:32:49.943725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.943767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cff00 len:0x10000 key:0x183300 00:23:30.246 [2024-11-09 17:32:49.943799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.943842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x183100 00:23:30.246 [2024-11-09 17:32:49.943880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.943924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x183000 00:23:30.246 [2024-11-09 17:32:49.943955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.943999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0dfd80 len:0x10000 key:0x183000 00:23:30.246 [2024-11-09 17:32:49.944031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.944074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x183100 00:23:30.246 [2024-11-09 17:32:49.944106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.944149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x183300 00:23:30.246 [2024-11-09 17:32:49.944181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.944224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a04f900 len:0x10000 key:0x183000 00:23:30.246 [2024-11-09 17:32:49.944255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.944298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a56fc00 len:0x10000 key:0x183100 00:23:30.246 [2024-11-09 17:32:49.944330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.944374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x183100 00:23:30.246 [2024-11-09 17:32:49.944406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.944466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1f180 len:0x10000 key:0x182f00 00:23:30.246 [2024-11-09 17:32:49.944480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.944498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff80 len:0x10000 key:0x183300 00:23:30.246 [2024-11-09 17:32:49.944511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.944528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35fb80 len:0x10000 key:0x183300 00:23:30.246 [2024-11-09 17:32:49.944541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.944558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x183300 00:23:30.246 [2024-11-09 17:32:49.944573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.944590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f280 len:0x10000 key:0x183300 00:23:30.246 [2024-11-09 17:32:49.944603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 
00:23:30.246 [2024-11-09 17:32:49.944619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a09fb80 len:0x10000 key:0x183000 00:23:30.246 [2024-11-09 17:32:49.944632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.246 [2024-11-09 17:32:49.944649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x183300 00:23:30.246 [2024-11-09 17:32:49.944662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.944679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x183000 00:23:30.247 [2024-11-09 17:32:49.944691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.944708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0afc00 len:0x10000 key:0x183000 00:23:30.247 [2024-11-09 17:32:49.944721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.944738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x183300 00:23:30.247 [2024-11-09 17:32:49.944750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.944767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x183100 00:23:30.247 [2024-11-09 17:32:49.944780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.944796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6f400 len:0x10000 key:0x182f00 00:23:30.247 [2024-11-09 17:32:49.944809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.944826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf680 len:0x10000 key:0x183300 00:23:30.247 [2024-11-09 17:32:49.944839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.944855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x182f00 00:23:30.247 [2024-11-09 17:32:49.944868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.944885] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x183100 00:23:30.247 [2024-11-09 17:32:49.944898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.944917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x182f00 00:23:30.247 [2024-11-09 17:32:49.944929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.944946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x183300 00:23:30.247 [2024-11-09 17:32:49.944958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.944975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x183300 00:23:30.247 [2024-11-09 17:32:49.944988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x183300 00:23:30.247 [2024-11-09 17:32:49.945018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a05f980 len:0x10000 key:0x183000 00:23:30.247 [2024-11-09 17:32:49.945047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8f500 len:0x10000 key:0x182f00 00:23:30.247 [2024-11-09 17:32:49.945078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27f480 len:0x10000 key:0x183300 00:23:30.247 [2024-11-09 17:32:49.945108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f700 len:0x10000 key:0x183000 00:23:30.247 [2024-11-09 17:32:49.945138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3f0000 len:0x10000 key:0x183300 00:23:30.247 [2024-11-09 17:32:49.945168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5be000 len:0x10000 key:0x184300 00:23:30.247 [2024-11-09 17:32:49.945198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5df000 len:0x10000 key:0x184300 00:23:30.247 [2024-11-09 17:32:49.945228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135cf000 len:0x10000 key:0x184300 00:23:30.247 [2024-11-09 17:32:49.945261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135ae000 len:0x10000 key:0x184300 00:23:30.247 [2024-11-09 17:32:49.945290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001358d000 len:0x10000 key:0x184300 00:23:30.247 [2024-11-09 17:32:49.945320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001356c000 len:0x10000 key:0x184300 00:23:30.247 [2024-11-09 17:32:49.945351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001354b000 len:0x10000 key:0x184300 00:23:30.247 [2024-11-09 17:32:49.945381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001352a000 len:0x10000 key:0x184300 00:23:30.247 [2024-11-09 17:32:49.945411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.247 [2024-11-09 17:32:49.945428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013695000 len:0x10000 key:0x184300 00:23:30.247 [2024-11-09 17:32:49.945441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013674000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013653000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013632000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013611000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135f0000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c66f000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c64e000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011931000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x200011952000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f5d000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f3c000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0e4000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0c3000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0a2000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c081000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c060000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4df000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.945975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.945992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4be000 
len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.946005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.946022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d49d000 len:0x10000 key:0x184300 00:23:30.248 [2024-11-09 17:32:49.946035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.948818] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192565c0 was disconnected and freed. reset controller. 00:23:30.248 [2024-11-09 17:32:49.948864] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.248 [2024-11-09 17:32:49.948906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x183700 00:23:30.248 [2024-11-09 17:32:49.948941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.948987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x184200 00:23:30.248 [2024-11-09 17:32:49.949022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.949065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x184200 00:23:30.248 [2024-11-09 17:32:49.949100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.949143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x183700 00:23:30.248 [2024-11-09 17:32:49.949177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.949220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x184200 00:23:30.248 [2024-11-09 17:32:49.949254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.949298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x183700 00:23:30.248 [2024-11-09 17:32:49.949331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.949375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a44f900 len:0x10000 key:0x183100 00:23:30.248 [2024-11-09 17:32:49.949408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.949470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x184200 00:23:30.248 [2024-11-09 17:32:49.949506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.949557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x184200 00:23:30.248 [2024-11-09 17:32:49.949572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.949589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7afe00 len:0x10000 key:0x183700 00:23:30.248 [2024-11-09 17:32:49.949602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.949620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x184200 00:23:30.248 [2024-11-09 17:32:49.949633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.949650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x184200 00:23:30.248 [2024-11-09 17:32:49.949663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.248 [2024-11-09 17:32:49.949680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef800 len:0x10000 key:0x183700 00:23:30.249 [2024-11-09 17:32:49.949693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.949709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x183700 00:23:30.249 [2024-11-09 17:32:49.949722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.949739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x183700 00:23:30.249 [2024-11-09 17:32:49.949752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.949769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183700 00:23:30.249 [2024-11-09 17:32:49.949782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.949799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x184200 00:23:30.249 [2024-11-09 17:32:49.949812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.949829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a40f700 len:0x10000 key:0x183100 00:23:30.249 [2024-11-09 17:32:49.949842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.949860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x183700 00:23:30.249 [2024-11-09 17:32:49.949874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.949892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x183700 00:23:30.249 [2024-11-09 17:32:49.949905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.949921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a41f780 len:0x10000 key:0x183100 00:23:30.249 [2024-11-09 17:32:49.949934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.949951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x184200 00:23:30.249 [2024-11-09 17:32:49.949964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.949981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x184200 00:23:30.249 [2024-11-09 17:32:49.949995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x183700 00:23:30.249 [2024-11-09 17:32:49.950024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x184200 00:23:30.249 [2024-11-09 17:32:49.950054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x183700 00:23:30.249 [2024-11-09 17:32:49.950083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x184200 00:23:30.249 [2024-11-09 17:32:49.950114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x183700 00:23:30.249 [2024-11-09 17:32:49.950144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x183700 00:23:30.249 [2024-11-09 17:32:49.950175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x184200 00:23:30.249 [2024-11-09 17:32:49.950206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x184200 00:23:30.249 [2024-11-09 17:32:49.950236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7bfe80 len:0x10000 key:0x183700 00:23:30.249 [2024-11-09 17:32:49.950265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x183700 00:23:30.249 [2024-11-09 17:32:49.950294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x184200 00:23:30.249 [2024-11-09 17:32:49.950324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 
00:23:30.249 [2024-11-09 17:32:49.950342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a76fc00 len:0x10000 key:0x183700 00:23:30.249 [2024-11-09 17:32:49.950354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6cf700 len:0x10000 key:0x183700 00:23:30.249 [2024-11-09 17:32:49.950384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9de000 len:0x10000 key:0x184300 00:23:30.249 [2024-11-09 17:32:49.950414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9ff000 len:0x10000 key:0x184300 00:23:30.249 [2024-11-09 17:32:49.950444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001375b000 len:0x10000 key:0x184300 00:23:30.249 [2024-11-09 17:32:49.950495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001373a000 len:0x10000 key:0x184300 00:23:30.249 [2024-11-09 17:32:49.950525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4a5000 len:0x10000 key:0x184300 00:23:30.249 [2024-11-09 17:32:49.950558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b484000 len:0x10000 key:0x184300 00:23:30.249 [2024-11-09 17:32:49.950589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.249 [2024-11-09 17:32:49.950607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b463000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.950620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.950638] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b442000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.950650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.950674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b421000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.950688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.950706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b400000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.950719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.950737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c87f000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.950750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.950768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c85e000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.950781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.950799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c83d000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.950812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.950829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c81c000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.950842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.950860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7fb000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.950873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.950890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7da000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.950903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.950922] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011fa3000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.950935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.950953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f82000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.950966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.950984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f61000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.950997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.951014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f40000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.951027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.951045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131af000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.951058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.951075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001318e000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.951088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.951106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c126000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.951119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.951136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9a8000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.951149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.951166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c987000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.951180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.951197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c966000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.951210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.951229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c945000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.951242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.951270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c924000 len:0x10000 key:0x184300 00:23:30.250 [2024-11-09 17:32:49.951285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.954085] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256380 was disconnected and freed. reset controller. 00:23:30.250 [2024-11-09 17:32:49.954130] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.250 [2024-11-09 17:32:49.954174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x184200 00:23:30.250 [2024-11-09 17:32:49.954208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.954257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183600 00:23:30.250 [2024-11-09 17:32:49.954291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.954336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183600 00:23:30.250 [2024-11-09 17:32:49.954370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.954414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183600 00:23:30.250 [2024-11-09 17:32:49.954466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.954484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183500 00:23:30.250 [2024-11-09 17:32:49.954497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.954514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x183f00 00:23:30.250 [2024-11-09 
17:32:49.954526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.954543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aadfd80 len:0x10000 key:0x183f00 00:23:30.250 [2024-11-09 17:32:49.954556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.954573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183500 00:23:30.250 [2024-11-09 17:32:49.954585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.954602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183600 00:23:30.250 [2024-11-09 17:32:49.954614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.954631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa4f900 len:0x10000 key:0x183f00 00:23:30.250 [2024-11-09 17:32:49.954646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.954664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183500 00:23:30.250 [2024-11-09 17:32:49.954676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.954692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183500 00:23:30.250 [2024-11-09 17:32:49.954705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.954721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183600 00:23:30.250 [2024-11-09 17:32:49.954733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.250 [2024-11-09 17:32:49.954750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183600 00:23:30.250 [2024-11-09 17:32:49.954763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.954779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183600 00:23:30.251 [2024-11-09 17:32:49.954792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.954809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183600 00:23:30.251 [2024-11-09 17:32:49.954821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.954838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183500 00:23:30.251 [2024-11-09 17:32:49.954850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.954867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa9fb80 len:0x10000 key:0x183f00 00:23:30.251 [2024-11-09 17:32:49.954879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.954895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183600 00:23:30.251 [2024-11-09 17:32:49.954908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.954925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x183f00 00:23:30.251 [2024-11-09 17:32:49.954938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.954953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaafc00 len:0x10000 key:0x183f00 00:23:30.251 [2024-11-09 17:32:49.954966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.954984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183500 00:23:30.251 [2024-11-09 17:32:49.954997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183500 00:23:30.251 [2024-11-09 17:32:49.955025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x184200 00:23:30.251 [2024-11-09 17:32:49.955055] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183600 00:23:30.251 [2024-11-09 17:32:49.955083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183600 00:23:30.251 [2024-11-09 17:32:49.955112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183500 00:23:30.251 [2024-11-09 17:32:49.955142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183600 00:23:30.251 [2024-11-09 17:32:49.955170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183600 00:23:30.251 [2024-11-09 17:32:49.955199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183500 00:23:30.251 [2024-11-09 17:32:49.955228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183500 00:23:30.251 [2024-11-09 17:32:49.955257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x183f00 00:23:30.251 [2024-11-09 17:32:49.955285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x184200 00:23:30.251 [2024-11-09 17:32:49.955316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183600 00:23:30.251 [2024-11-09 17:32:49.955345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x183f00 00:23:30.251 [2024-11-09 17:32:49.955374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x183600 00:23:30.251 [2024-11-09 17:32:49.955403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdfe000 len:0x10000 key:0x184300 00:23:30.251 [2024-11-09 17:32:49.955432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe1f000 len:0x10000 key:0x184300 00:23:30.251 [2024-11-09 17:32:49.955468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b56b000 len:0x10000 key:0x184300 00:23:30.251 [2024-11-09 17:32:49.955498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b54a000 len:0x10000 key:0x184300 00:23:30.251 [2024-11-09 17:32:49.955527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011973000 len:0x10000 key:0x184300 00:23:30.251 [2024-11-09 17:32:49.955557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df50000 len:0x10000 key:0x184300 00:23:30.251 [2024-11-09 17:32:49.955586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 
m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df71000 len:0x10000 key:0x184300 00:23:30.251 [2024-11-09 17:32:49.955616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df92000 len:0x10000 key:0x184300 00:23:30.251 [2024-11-09 17:32:49.955648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e0bb000 len:0x10000 key:0x184300 00:23:30.251 [2024-11-09 17:32:49.955678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e09a000 len:0x10000 key:0x184300 00:23:30.251 [2024-11-09 17:32:49.955708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e079000 len:0x10000 key:0x184300 00:23:30.251 [2024-11-09 17:32:49.955738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e058000 len:0x10000 key:0x184300 00:23:30.251 [2024-11-09 17:32:49.955766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e037000 len:0x10000 key:0x184300 00:23:30.251 [2024-11-09 17:32:49.955796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e016000 len:0x10000 key:0x184300 00:23:30.251 [2024-11-09 17:32:49.955825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.251 [2024-11-09 17:32:49.955842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dff5000 len:0x10000 key:0x184300 00:23:30.252 [2024-11-09 17:32:49.955855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 
17:32:49.955872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfd4000 len:0x10000 key:0x184300 00:23:30.252 [2024-11-09 17:32:49.955884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.955901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121b3000 len:0x10000 key:0x184300 00:23:30.252 [2024-11-09 17:32:49.955914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.955931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012192000 len:0x10000 key:0x184300 00:23:30.252 [2024-11-09 17:32:49.955944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.955961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012171000 len:0x10000 key:0x184300 00:23:30.252 [2024-11-09 17:32:49.955975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.955992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012150000 len:0x10000 key:0x184300 00:23:30.252 [2024-11-09 17:32:49.956004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.956021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000133bf000 len:0x10000 key:0x184300 00:23:30.252 [2024-11-09 17:32:49.956034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.956051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001339e000 len:0x10000 key:0x184300 00:23:30.252 [2024-11-09 17:32:49.956063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.956080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c336000 len:0x10000 key:0x184300 00:23:30.252 [2024-11-09 17:32:49.956093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.956110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbfa000 len:0x10000 key:0x184300 00:23:30.252 [2024-11-09 17:32:49.956123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.956140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbd9000 len:0x10000 key:0x184300 00:23:30.252 [2024-11-09 17:32:49.956152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.956169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbb8000 len:0x10000 key:0x184300 00:23:30.252 [2024-11-09 17:32:49.956182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.956199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb97000 len:0x10000 key:0x184300 00:23:30.252 [2024-11-09 17:32:49.956212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.956230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb76000 len:0x10000 key:0x184300 00:23:30.252 [2024-11-09 17:32:49.956242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.958960] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256140 was disconnected and freed. reset controller. 00:23:30.252 [2024-11-09 17:32:49.959005] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:30.252 [2024-11-09 17:32:49.959048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183a00 00:23:30.252 [2024-11-09 17:32:49.959082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183800 00:23:30.252 [2024-11-09 17:32:49.959188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183800 00:23:30.252 [2024-11-09 17:32:49.959268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183a00 00:23:30.252 [2024-11-09 17:32:49.959347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183800 00:23:30.252 [2024-11-09 17:32:49.959425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183a00 00:23:30.252 [2024-11-09 17:32:49.959531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183a00 00:23:30.252 [2024-11-09 17:32:49.959575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183800 00:23:30.252 [2024-11-09 17:32:49.959604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183800 00:23:30.252 [2024-11-09 17:32:49.959634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 
17:32:49.959651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183a00 00:23:30.252 [2024-11-09 17:32:49.959663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183900 00:23:30.252 [2024-11-09 17:32:49.959692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183800 00:23:30.252 [2024-11-09 17:32:49.959721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183a00 00:23:30.252 [2024-11-09 17:32:49.959755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183a00 00:23:30.252 [2024-11-09 17:32:49.959785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183800 00:23:30.252 [2024-11-09 17:32:49.959814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183800 00:23:30.252 [2024-11-09 17:32:49.959843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183800 00:23:30.252 [2024-11-09 17:32:49.959872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183a00 00:23:30.252 [2024-11-09 17:32:49.959901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183800 00:23:30.252 [2024-11-09 17:32:49.959930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183a00 00:23:30.252 [2024-11-09 17:32:49.959959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.959975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183a00 00:23:30.252 [2024-11-09 17:32:49.959988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.252 [2024-11-09 17:32:49.960005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183800 00:23:30.252 [2024-11-09 17:32:49.960017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183800 00:23:30.253 [2024-11-09 17:32:49.960046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183a00 00:23:30.253 [2024-11-09 17:32:49.960076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183800 00:23:30.253 [2024-11-09 17:32:49.960105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183a00 00:23:30.253 [2024-11-09 17:32:49.960144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183800 00:23:30.253 [2024-11-09 17:32:49.960171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183a00 00:23:30.253 [2024-11-09 17:32:49.960199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183a00 00:23:30.253 [2024-11-09 17:32:49.960226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183800 00:23:30.253 [2024-11-09 17:32:49.960253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183800 00:23:30.253 [2024-11-09 17:32:49.960280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183a00 00:23:30.253 [2024-11-09 17:32:49.960308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x183a00 00:23:30.253 [2024-11-09 17:32:49.960334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183800 00:23:30.253 [2024-11-09 17:32:49.960362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183a00 00:23:30.253 [2024-11-09 17:32:49.960389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183a00 00:23:30.253 [2024-11-09 17:32:49.960418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001021e000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001023f000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b652000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b631000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e436000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e457000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e478000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011cac000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c8b000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x200011c6a000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8c5000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8a4000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b883000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b862000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b841000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b820000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124aa000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c315000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2f4000 len:0x10000 key:0x184300 
00:23:30.253 [2024-11-09 17:32:49.960953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2d3000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.960982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.960998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2b2000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.961010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.253 [2024-11-09 17:32:49.961026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c291000 len:0x10000 key:0x184300 00:23:30.253 [2024-11-09 17:32:49.961039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.961055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c270000 len:0x10000 key:0x184300 00:23:30.254 [2024-11-09 17:32:49.961068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.961084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ef000 len:0x10000 key:0x184300 00:23:30.254 [2024-11-09 17:32:49.961096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.961112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ce000 len:0x10000 key:0x184300 00:23:30.254 [2024-11-09 17:32:49.961123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.961139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ad000 len:0x10000 key:0x184300 00:23:30.254 [2024-11-09 17:32:49.961151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.961166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d68c000 len:0x10000 key:0x184300 00:23:30.254 [2024-11-09 17:32:49.961178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.961193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce0a000 len:0x10000 key:0x184300 00:23:30.254 [2024-11-09 17:32:49.961205] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.963699] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806c00 was disconnected and freed. reset controller. 00:23:30.254 [2024-11-09 17:32:49.963744] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.254 [2024-11-09 17:32:49.963789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x184000 00:23:30.254 [2024-11-09 17:32:49.963824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.963872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x184000 00:23:30.254 [2024-11-09 17:32:49.963906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.963949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184400 00:23:30.254 [2024-11-09 17:32:49.963983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x184000 00:23:30.254 [2024-11-09 17:32:49.964069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184400 00:23:30.254 [2024-11-09 17:32:49.964148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x184000 00:23:30.254 [2024-11-09 17:32:49.964227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183900 00:23:30.254 [2024-11-09 17:32:49.964306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183900 00:23:30.254 [2024-11-09 17:32:49.964372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x184400 00:23:30.254 [2024-11-09 17:32:49.964400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184400 00:23:30.254 [2024-11-09 17:32:49.964429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x184000 00:23:30.254 [2024-11-09 17:32:49.964460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x184000 00:23:30.254 [2024-11-09 17:32:49.964488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183900 00:23:30.254 [2024-11-09 17:32:49.964515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183900 00:23:30.254 [2024-11-09 17:32:49.964542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x184000 00:23:30.254 [2024-11-09 17:32:49.964569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:57472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x184000 00:23:30.254 [2024-11-09 17:32:49.964599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183900 00:23:30.254 [2024-11-09 17:32:49.964625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 
00:23:30.254 [2024-11-09 17:32:49.964640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x184000 00:23:30.254 [2024-11-09 17:32:49.964652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x184000 00:23:30.254 [2024-11-09 17:32:49.964680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184400 00:23:30.254 [2024-11-09 17:32:49.964707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184400 00:23:30.254 [2024-11-09 17:32:49.964734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x184000 00:23:30.254 [2024-11-09 17:32:49.964760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x184000 00:23:30.254 [2024-11-09 17:32:49.964788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x184000 00:23:30.254 [2024-11-09 17:32:49.964815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184400 00:23:30.254 [2024-11-09 17:32:49.964841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.254 [2024-11-09 17:32:49.964857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184400 00:23:30.254 [2024-11-09 17:32:49.964868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 
17:32:49.964884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x184000 00:23:30.255 [2024-11-09 17:32:49.964898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.964913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184400 00:23:30.255 [2024-11-09 17:32:49.964925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.964940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x184400 00:23:30.255 [2024-11-09 17:32:49.964952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.964967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x184000 00:23:30.255 [2024-11-09 17:32:49.964978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.964994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x183900 00:23:30.255 [2024-11-09 17:32:49.965006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x184000 00:23:30.255 [2024-11-09 17:32:49.965033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184400 00:23:30.255 [2024-11-09 17:32:49.965060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x184000 00:23:30.255 [2024-11-09 17:32:49.965087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183900 00:23:30.255 [2024-11-09 17:32:49.965115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x184000 00:23:30.255 [2024-11-09 17:32:49.965141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x184000 00:23:30.255 [2024-11-09 17:32:49.965167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x184000 00:23:30.255 [2024-11-09 17:32:49.965196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x184000 00:23:30.255 [2024-11-09 17:32:49.965223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x184000 00:23:30.255 [2024-11-09 17:32:49.965250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184400 00:23:30.255 [2024-11-09 17:32:49.965277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183900 00:23:30.255 [2024-11-09 17:32:49.965305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x184000 00:23:30.255 [2024-11-09 17:32:49.965333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x184000 00:23:30.255 [2024-11-09 17:32:49.965359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965374] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x184000 00:23:30.255 [2024-11-09 17:32:49.965386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183900 00:23:30.255 [2024-11-09 17:32:49.965413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x184000 00:23:30.255 [2024-11-09 17:32:49.965440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x184000 00:23:30.255 [2024-11-09 17:32:49.965472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184400 00:23:30.255 [2024-11-09 17:32:49.965500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c45000 len:0x10000 key:0x184300 00:23:30.255 [2024-11-09 17:32:49.965527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ada000 len:0x10000 key:0x184300 00:23:30.255 [2024-11-09 17:32:49.965556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012afb000 len:0x10000 key:0x184300 00:23:30.255 [2024-11-09 17:32:49.965583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012b1c000 len:0x10000 key:0x184300 00:23:30.255 [2024-11-09 17:32:49.965611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:51840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012b3d000 len:0x10000 key:0x184300 00:23:30.255 [2024-11-09 17:32:49.965638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012b5e000 len:0x10000 key:0x184300 00:23:30.255 [2024-11-09 17:32:49.965666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012b7f000 len:0x10000 key:0x184300 00:23:30.255 [2024-11-09 17:32:49.965694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d983000 len:0x10000 key:0x184300 00:23:30.255 [2024-11-09 17:32:49.965723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d9a4000 len:0x10000 key:0x184300 00:23:30.255 [2024-11-09 17:32:49.965751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d9c5000 len:0x10000 key:0x184300 00:23:30.255 [2024-11-09 17:32:49.965778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e13000 len:0x10000 key:0x184300 00:23:30.255 [2024-11-09 17:32:49.965806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012df2000 len:0x10000 key:0x184300 00:23:30.255 [2024-11-09 17:32:49.965836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.255 [2024-11-09 17:32:49.965851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba93000 len:0x10000 key:0x184300 00:23:30.255 [2024-11-09 17:32:49.965863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.256 [2024-11-09 17:32:49.965878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54528 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20000ba72000 len:0x10000 key:0x184300 00:23:30.256 [2024-11-09 17:32:49.965890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.256 [2024-11-09 17:32:49.965906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba51000 len:0x10000 key:0x184300 00:23:30.256 [2024-11-09 17:32:49.965917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9b071000 sqhd:5310 p:0 m:0 dnr:0 00:23:30.256 [2024-11-09 17:32:49.982106] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8069c0 was disconnected and freed. reset controller. 00:23:30.256 [2024-11-09 17:32:49.982159] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.256 [2024-11-09 17:32:49.982330] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.256 [2024-11-09 17:32:49.982378] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.256 [2024-11-09 17:32:49.982390] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.256 [2024-11-09 17:32:49.982403] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.256 [2024-11-09 17:32:49.982415] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.256 [2024-11-09 17:32:49.982427] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.256 [2024-11-09 17:32:49.982438] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.256 [2024-11-09 17:32:49.982450] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.256 [2024-11-09 17:32:49.982469] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:30.256 [2024-11-09 17:32:49.982481] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
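The abort dump above is easier to read in aggregate than entry by entry. A hypothetical helper (assuming this console output has been saved to a file, named build.log here purely for illustration) that counts how many in-flight commands were completed with ABORTED - SQ DELETION on each submission queue:

    # Illustrative one-liner, not part of the test scripts: tally aborted
    # completions per qid from a saved copy of this log.
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c | sort -rn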
00:23:30.515 task offset: 81280 on job bdev=Nvme1n1 fails 00:23:30.515 00:23:30.515 Latency(us) 00:23:30.515 [2024-11-09T16:32:50.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme1n1 ended in about 1.96 seconds with error 00:23:30.515 Verification LBA range: start 0x0 length 0x400 00:23:30.515 Nvme1n1 : 1.96 314.25 19.64 32.60 0.00 183622.90 42572.19 1093874.48 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme2n1 ended in about 1.97 seconds with error 00:23:30.515 Verification LBA range: start 0x0 length 0x400 00:23:30.515 Nvme2n1 : 1.97 318.93 19.93 32.50 0.00 180516.75 41104.18 1093874.48 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme3n1 ended in about 1.97 seconds with error 00:23:30.515 Verification LBA range: start 0x0 length 0x400 00:23:30.515 Nvme3n1 : 1.97 317.98 19.87 32.41 0.00 180427.05 41943.04 1093874.48 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme4n1 ended in about 1.98 seconds with error 00:23:30.515 Verification LBA range: start 0x0 length 0x400 00:23:30.515 Nvme4n1 : 1.98 319.15 19.95 32.32 0.00 179280.88 19503.51 1093874.48 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme5n1 ended in about 1.99 seconds with error 00:23:30.515 Verification LBA range: start 0x0 length 0x400 00:23:30.515 Nvme5n1 : 1.99 316.33 19.77 32.24 0.00 180235.36 45717.91 1093874.48 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme6n1 ended in about 1.99 seconds with error 00:23:30.515 Verification LBA range: start 0x0 length 0x400 00:23:30.515 Nvme6n1 : 1.99 315.44 19.71 32.15 0.00 180183.39 46556.77 1093874.48 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme7n1 ended in about 2.00 seconds with error 00:23:30.515 Verification LBA range: start 0x0 length 0x400 00:23:30.515 Nvme7n1 : 2.00 314.61 19.66 32.06 0.00 180079.75 47185.92 1093874.48 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme8n1 ended in about 2.00 seconds with error 00:23:30.515 Verification LBA range: start 0x0 length 0x400 00:23:30.515 Nvme8n1 : 2.00 313.83 19.61 31.98 0.00 179971.71 46976.20 1093874.48 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme9n1 ended in about 2.01 seconds with error 00:23:30.515 Verification LBA range: start 0x0 length 0x400 00:23:30.515 Nvme9n1 : 2.01 313.06 19.57 31.90 0.00 179871.09 45717.91 1093874.48 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.515 [2024-11-09T16:32:50.285Z] Job: Nvme10n1 ended in about 2.01 seconds with error 
00:23:30.515 Verification LBA range: start 0x0 length 0x400 00:23:30.515 Nvme10n1 : 2.01 208.38 13.02 31.83 0.00 257520.93 44249.91 1087163.60 00:23:30.515 [2024-11-09T16:32:50.285Z] =================================================================================================================== 00:23:30.515 [2024-11-09T16:32:50.285Z] Total : 3051.96 190.75 321.99 0.00 186011.13 19503.51 1093874.48 00:23:30.515 [2024-11-09 17:32:50.004700] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:30.515 [2024-11-09 17:32:50.004727] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:30.515 [2024-11-09 17:32:50.004743] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:30.515 [2024-11-09 17:32:50.004755] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:30.515 [2024-11-09 17:32:50.004766] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:30.515 [2024-11-09 17:32:50.004875] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:30.515 [2024-11-09 17:32:50.004887] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:30.515 [2024-11-09 17:32:50.004899] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:30.515 [2024-11-09 17:32:50.004910] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:30.516 [2024-11-09 17:32:50.004920] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:30.516 [2024-11-09 17:32:50.004932] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:30.516 [2024-11-09 17:32:50.016123] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:30.516 [2024-11-09 17:32:50.016150] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:30.516 [2024-11-09 17:32:50.016167] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:23:30.516 [2024-11-09 17:32:50.016258] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:30.516 [2024-11-09 17:32:50.016269] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:30.516 [2024-11-09 17:32:50.016276] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ba580 00:23:30.516 [2024-11-09 17:32:50.016346] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:30.516 [2024-11-09 17:32:50.016356] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:30.516 [2024-11-09 17:32:50.016364] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6100 00:23:30.516 [2024-11-09 17:32:50.016440] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event 
channel (status = 8) 00:23:30.516 [2024-11-09 17:32:50.016451] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:30.516 [2024-11-09 17:32:50.016497] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd540 00:23:30.516 [2024-11-09 17:32:50.016612] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:30.516 [2024-11-09 17:32:50.016624] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:30.516 [2024-11-09 17:32:50.016631] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192dc7c0 00:23:30.516 [2024-11-09 17:32:50.016702] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:30.516 [2024-11-09 17:32:50.016713] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:30.516 [2024-11-09 17:32:50.016721] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e53c0 00:23:30.516 [2024-11-09 17:32:50.016785] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:30.516 [2024-11-09 17:32:50.016796] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:30.516 [2024-11-09 17:32:50.016804] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f500 00:23:30.516 [2024-11-09 17:32:50.016888] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:30.516 [2024-11-09 17:32:50.016899] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:30.516 [2024-11-09 17:32:50.016907] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e180 00:23:30.516 [2024-11-09 17:32:50.016988] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:30.516 [2024-11-09 17:32:50.016999] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:30.516 [2024-11-09 17:32:50.017007] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929c180 00:23:30.516 [2024-11-09 17:32:50.017099] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:30.516 [2024-11-09 17:32:50.017110] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:30.516 [2024-11-09 17:32:50.017117] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a89c0 00:23:30.776 17:32:50 -- target/shutdown.sh@141 -- # kill -9 2776248 00:23:30.776 17:32:50 -- target/shutdown.sh@143 -- # stoptarget 00:23:30.776 17:32:50 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:30.776 17:32:50 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:30.776 17:32:50 -- target/shutdown.sh@43 -- # rm -rf 
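For context, the reconnect attempts rejected above target the subsystems named in the resetting-controller notices (nqn.2016-06.io.spdk:cnode1 through cnode10). A rough manual equivalent of one such attempt, assuming the addressing these scripts set up elsewhere in this log (NVMF_IP_PREFIX=192.168.100 with least address 8, NVMF_PORT=4420, and the RDMA-specific NVME_CONNECT='nvme connect -i 15'), would be:

    # Illustrative only; address, port and queue count are the test defaults
    # noted above, not values read from this particular run.
    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1

The CM-level rejections (and the resulting 'RDMA connect error -74' entries) appear to be part of the expected flow here, presumably because the shutdown test is tearing the target down while the initiator keeps retrying.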
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:30.776 17:32:50 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:30.776 17:32:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:30.776 17:32:50 -- nvmf/common.sh@116 -- # sync 00:23:30.776 17:32:50 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:30.776 17:32:50 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:30.776 17:32:50 -- nvmf/common.sh@119 -- # set +e 00:23:30.776 17:32:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:30.776 17:32:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:30.776 rmmod nvme_rdma 00:23:30.776 rmmod nvme_fabrics 00:23:30.776 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 120: 2776248 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:23:30.776 17:32:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:30.776 17:32:50 -- nvmf/common.sh@123 -- # set -e 00:23:30.776 17:32:50 -- nvmf/common.sh@124 -- # return 0 00:23:30.776 17:32:50 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:23:30.776 17:32:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:30.776 17:32:50 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:30.776 00:23:30.776 real 0m5.410s 00:23:30.776 user 0m18.497s 00:23:30.776 sys 0m1.354s 00:23:30.776 17:32:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:30.776 17:32:50 -- common/autotest_common.sh@10 -- # set +x 00:23:30.776 ************************************ 00:23:30.776 END TEST nvmf_shutdown_tc3 00:23:30.776 ************************************ 00:23:30.776 17:32:50 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:23:30.776 00:23:30.776 real 0m25.550s 00:23:30.776 user 1m15.533s 00:23:30.776 sys 0m9.088s 00:23:30.776 17:32:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:30.776 17:32:50 -- common/autotest_common.sh@10 -- # set +x 00:23:30.776 ************************************ 00:23:30.776 END TEST nvmf_shutdown 00:23:30.776 ************************************ 00:23:30.776 17:32:50 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:30.776 17:32:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:30.776 17:32:50 -- common/autotest_common.sh@10 -- # set +x 00:23:31.036 17:32:50 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:31.036 17:32:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:31.036 17:32:50 -- common/autotest_common.sh@10 -- # set +x 00:23:31.036 17:32:50 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:31.036 17:32:50 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:23:31.036 17:32:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:31.036 17:32:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:31.036 17:32:50 -- common/autotest_common.sh@10 -- # set +x 00:23:31.036 ************************************ 00:23:31.036 START TEST nvmf_multicontroller 00:23:31.036 ************************************ 00:23:31.036 17:32:50 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:23:31.036 * Looking for test storage... 
00:23:31.036 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:31.036 17:32:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:31.036 17:32:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:31.036 17:32:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:31.036 17:32:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:31.036 17:32:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:31.036 17:32:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:31.036 17:32:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:31.036 17:32:50 -- scripts/common.sh@335 -- # IFS=.-: 00:23:31.036 17:32:50 -- scripts/common.sh@335 -- # read -ra ver1 00:23:31.036 17:32:50 -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.036 17:32:50 -- scripts/common.sh@336 -- # read -ra ver2 00:23:31.036 17:32:50 -- scripts/common.sh@337 -- # local 'op=<' 00:23:31.036 17:32:50 -- scripts/common.sh@339 -- # ver1_l=2 00:23:31.036 17:32:50 -- scripts/common.sh@340 -- # ver2_l=1 00:23:31.036 17:32:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:31.036 17:32:50 -- scripts/common.sh@343 -- # case "$op" in 00:23:31.036 17:32:50 -- scripts/common.sh@344 -- # : 1 00:23:31.036 17:32:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:31.036 17:32:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:31.036 17:32:50 -- scripts/common.sh@364 -- # decimal 1 00:23:31.036 17:32:50 -- scripts/common.sh@352 -- # local d=1 00:23:31.036 17:32:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.036 17:32:50 -- scripts/common.sh@354 -- # echo 1 00:23:31.036 17:32:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:31.036 17:32:50 -- scripts/common.sh@365 -- # decimal 2 00:23:31.036 17:32:50 -- scripts/common.sh@352 -- # local d=2 00:23:31.036 17:32:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.036 17:32:50 -- scripts/common.sh@354 -- # echo 2 00:23:31.036 17:32:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:31.036 17:32:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:31.036 17:32:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:31.036 17:32:50 -- scripts/common.sh@367 -- # return 0 00:23:31.036 17:32:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.036 17:32:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:31.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.036 --rc genhtml_branch_coverage=1 00:23:31.036 --rc genhtml_function_coverage=1 00:23:31.036 --rc genhtml_legend=1 00:23:31.036 --rc geninfo_all_blocks=1 00:23:31.036 --rc geninfo_unexecuted_blocks=1 00:23:31.036 00:23:31.036 ' 00:23:31.036 17:32:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:31.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.037 --rc genhtml_branch_coverage=1 00:23:31.037 --rc genhtml_function_coverage=1 00:23:31.037 --rc genhtml_legend=1 00:23:31.037 --rc geninfo_all_blocks=1 00:23:31.037 --rc geninfo_unexecuted_blocks=1 00:23:31.037 00:23:31.037 ' 00:23:31.037 17:32:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:31.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.037 --rc genhtml_branch_coverage=1 00:23:31.037 --rc genhtml_function_coverage=1 00:23:31.037 --rc genhtml_legend=1 00:23:31.037 --rc geninfo_all_blocks=1 00:23:31.037 --rc geninfo_unexecuted_blocks=1 00:23:31.037 00:23:31.037 ' 
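The scripts/common.sh trace above is the version gate that decides whether the old lcov flags are needed: 'lt 1.15 2' splits both version strings on '.', '-' and ':' and compares them component by component. A simplified sketch of that comparison (an assumption for illustration, not the exact helper from scripts/common.sh):

    # Sketch of the "lt 1.15 2" check traced above: split each version string
    # and compare the numeric components left to right.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v d1 d2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            if ((d1 > d2)); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if ((d1 < d2)); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }

    lt 1.15 2 && echo "lcov older than 2: keep the --rc branch/function coverage flags"

Because the installed lcov (1.15) compares lower than 2, the branch/function coverage options seen in the LCOV_OPTS export above are kept for this run.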
00:23:31.037 17:32:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:31.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.037 --rc genhtml_branch_coverage=1 00:23:31.037 --rc genhtml_function_coverage=1 00:23:31.037 --rc genhtml_legend=1 00:23:31.037 --rc geninfo_all_blocks=1 00:23:31.037 --rc geninfo_unexecuted_blocks=1 00:23:31.037 00:23:31.037 ' 00:23:31.037 17:32:50 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.037 17:32:50 -- nvmf/common.sh@7 -- # uname -s 00:23:31.037 17:32:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.037 17:32:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.037 17:32:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.037 17:32:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.037 17:32:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.037 17:32:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.037 17:32:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.037 17:32:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.037 17:32:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.037 17:32:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.037 17:32:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:31.037 17:32:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:31.037 17:32:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.037 17:32:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.037 17:32:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.037 17:32:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:31.037 17:32:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.037 17:32:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.037 17:32:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.037 17:32:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.037 17:32:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.037 17:32:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.037 17:32:50 -- paths/export.sh@5 -- # export PATH 00:23:31.037 17:32:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.037 17:32:50 -- nvmf/common.sh@46 -- # : 0 00:23:31.037 17:32:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:31.037 17:32:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:31.037 17:32:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:31.037 17:32:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.037 17:32:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.037 17:32:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:31.037 17:32:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:31.037 17:32:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:31.037 17:32:50 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:31.037 17:32:50 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:31.037 17:32:50 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:31.037 17:32:50 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:31.037 17:32:50 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:31.037 17:32:50 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:23:31.037 17:32:50 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:23:31.037 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:23:31.037 17:32:50 -- host/multicontroller.sh@20 -- # exit 0 00:23:31.037 00:23:31.037 real 0m0.220s 00:23:31.037 user 0m0.113s 00:23:31.037 sys 0m0.124s 00:23:31.037 17:32:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:31.037 17:32:50 -- common/autotest_common.sh@10 -- # set +x 00:23:31.037 ************************************ 00:23:31.037 END TEST nvmf_multicontroller 00:23:31.037 ************************************ 00:23:31.297 17:32:50 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:23:31.297 17:32:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:31.297 17:32:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:31.297 17:32:50 -- common/autotest_common.sh@10 -- # set +x 00:23:31.297 ************************************ 00:23:31.297 START TEST nvmf_aer 00:23:31.297 ************************************ 00:23:31.297 17:32:50 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:23:31.297 * Looking for test storage... 00:23:31.297 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:31.297 17:32:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:31.297 17:32:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:31.297 17:32:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:31.297 17:32:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:31.297 17:32:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:31.297 17:32:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:31.297 17:32:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:31.297 17:32:50 -- scripts/common.sh@335 -- # IFS=.-: 00:23:31.297 17:32:50 -- scripts/common.sh@335 -- # read -ra ver1 00:23:31.297 17:32:50 -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.297 17:32:50 -- scripts/common.sh@336 -- # read -ra ver2 00:23:31.297 17:32:50 -- scripts/common.sh@337 -- # local 'op=<' 00:23:31.297 17:32:50 -- scripts/common.sh@339 -- # ver1_l=2 00:23:31.297 17:32:50 -- scripts/common.sh@340 -- # ver2_l=1 00:23:31.297 17:32:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:31.297 17:32:50 -- scripts/common.sh@343 -- # case "$op" in 00:23:31.297 17:32:50 -- scripts/common.sh@344 -- # : 1 00:23:31.297 17:32:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:31.297 17:32:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:31.297 17:32:50 -- scripts/common.sh@364 -- # decimal 1 00:23:31.297 17:32:51 -- scripts/common.sh@352 -- # local d=1 00:23:31.297 17:32:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.297 17:32:51 -- scripts/common.sh@354 -- # echo 1 00:23:31.297 17:32:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:31.297 17:32:51 -- scripts/common.sh@365 -- # decimal 2 00:23:31.297 17:32:51 -- scripts/common.sh@352 -- # local d=2 00:23:31.297 17:32:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.297 17:32:51 -- scripts/common.sh@354 -- # echo 2 00:23:31.297 17:32:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:31.297 17:32:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:31.297 17:32:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:31.297 17:32:51 -- scripts/common.sh@367 -- # return 0 00:23:31.297 17:32:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.297 17:32:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:31.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.297 --rc genhtml_branch_coverage=1 00:23:31.297 --rc genhtml_function_coverage=1 00:23:31.297 --rc genhtml_legend=1 00:23:31.297 --rc geninfo_all_blocks=1 00:23:31.297 --rc geninfo_unexecuted_blocks=1 00:23:31.297 00:23:31.297 ' 00:23:31.297 17:32:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:31.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.297 --rc genhtml_branch_coverage=1 00:23:31.297 --rc genhtml_function_coverage=1 00:23:31.297 --rc genhtml_legend=1 00:23:31.297 --rc geninfo_all_blocks=1 00:23:31.297 --rc geninfo_unexecuted_blocks=1 00:23:31.297 00:23:31.297 ' 00:23:31.297 17:32:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:31.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.297 --rc genhtml_branch_coverage=1 00:23:31.297 --rc genhtml_function_coverage=1 00:23:31.297 --rc genhtml_legend=1 00:23:31.297 --rc geninfo_all_blocks=1 00:23:31.297 --rc geninfo_unexecuted_blocks=1 00:23:31.297 00:23:31.297 ' 00:23:31.297 17:32:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:31.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.297 --rc genhtml_branch_coverage=1 00:23:31.297 --rc genhtml_function_coverage=1 00:23:31.297 --rc genhtml_legend=1 00:23:31.297 --rc geninfo_all_blocks=1 00:23:31.297 --rc geninfo_unexecuted_blocks=1 00:23:31.297 00:23:31.297 ' 00:23:31.297 17:32:51 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.297 17:32:51 -- nvmf/common.sh@7 -- # uname -s 00:23:31.297 17:32:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.297 17:32:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.297 17:32:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.297 17:32:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.297 17:32:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.297 17:32:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.297 17:32:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.297 17:32:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.297 17:32:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.297 17:32:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.297 17:32:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
00:23:31.297 17:32:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:31.297 17:32:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.297 17:32:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.297 17:32:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.298 17:32:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:31.298 17:32:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.298 17:32:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.298 17:32:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.298 17:32:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.298 17:32:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.298 17:32:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.298 17:32:51 -- paths/export.sh@5 -- # export PATH 00:23:31.298 17:32:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.298 17:32:51 -- nvmf/common.sh@46 -- # : 0 00:23:31.298 17:32:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:31.298 17:32:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:31.298 17:32:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:31.298 17:32:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.298 17:32:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.298 17:32:51 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:31.298 17:32:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:31.298 17:32:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:31.298 17:32:51 -- host/aer.sh@11 -- # nvmftestinit 00:23:31.298 17:32:51 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:31.298 17:32:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.298 17:32:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:31.298 17:32:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:31.298 17:32:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:31.298 17:32:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.298 17:32:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.298 17:32:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.298 17:32:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:31.298 17:32:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:31.298 17:32:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:31.298 17:32:51 -- common/autotest_common.sh@10 -- # set +x 00:23:37.870 17:32:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:37.870 17:32:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:37.870 17:32:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:37.870 17:32:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:37.870 17:32:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:37.870 17:32:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:37.870 17:32:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:37.870 17:32:56 -- nvmf/common.sh@294 -- # net_devs=() 00:23:37.870 17:32:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:37.870 17:32:56 -- nvmf/common.sh@295 -- # e810=() 00:23:37.870 17:32:56 -- nvmf/common.sh@295 -- # local -ga e810 00:23:37.870 17:32:56 -- nvmf/common.sh@296 -- # x722=() 00:23:37.870 17:32:56 -- nvmf/common.sh@296 -- # local -ga x722 00:23:37.870 17:32:56 -- nvmf/common.sh@297 -- # mlx=() 00:23:37.870 17:32:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:37.870 17:32:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.870 17:32:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.870 17:32:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.870 17:32:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.870 17:32:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.870 17:32:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.870 17:32:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.870 17:32:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.870 17:32:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.870 17:32:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.870 17:32:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.870 17:32:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:37.870 17:32:56 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:37.870 17:32:56 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:37.870 17:32:56 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:37.870 17:32:56 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:37.870 17:32:56 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:37.870 17:32:56 -- 
nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:37.870 17:32:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:37.870 17:32:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:37.870 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:37.871 17:32:56 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:37.871 17:32:56 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:37.871 17:32:56 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:37.871 17:32:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:37.871 17:32:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:37.871 17:32:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:37.871 17:32:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:37.871 17:32:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:37.871 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:37.871 17:32:56 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:37.871 17:32:56 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:37.871 17:32:56 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:37.871 17:32:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:37.871 17:32:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:37.871 17:32:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:37.871 17:32:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:37.871 17:32:56 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:37.871 17:32:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:37.871 17:32:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.871 17:32:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:37.871 17:32:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.871 17:32:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:37.871 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:37.871 17:32:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.871 17:32:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:37.871 17:32:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.871 17:32:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:37.871 17:32:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.871 17:32:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:37.871 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:37.871 17:32:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.871 17:32:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:37.871 17:32:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:37.871 17:32:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:37.871 17:32:56 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:37.871 17:32:56 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:37.871 17:32:56 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:37.871 17:32:56 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:37.871 17:32:56 -- nvmf/common.sh@57 -- # uname 00:23:37.871 17:32:56 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:37.871 17:32:56 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:37.871 17:32:56 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:37.871 17:32:56 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:37.871 17:32:57 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:37.871 17:32:57 -- nvmf/common.sh@65 -- # 
modprobe iw_cm 00:23:37.871 17:32:57 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:37.871 17:32:57 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:37.871 17:32:57 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:37.871 17:32:57 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:37.871 17:32:57 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:37.871 17:32:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:37.871 17:32:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:37.871 17:32:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:37.871 17:32:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:37.871 17:32:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:37.871 17:32:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:37.871 17:32:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:37.871 17:32:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:37.871 17:32:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:37.871 17:32:57 -- nvmf/common.sh@104 -- # continue 2 00:23:37.871 17:32:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:37.871 17:32:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:37.871 17:32:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:37.871 17:32:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:37.871 17:32:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:37.871 17:32:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:37.871 17:32:57 -- nvmf/common.sh@104 -- # continue 2 00:23:37.871 17:32:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:37.871 17:32:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:37.871 17:32:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:37.871 17:32:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:37.871 17:32:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:37.871 17:32:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:37.871 17:32:57 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:37.871 17:32:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:37.871 17:32:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:37.871 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:37.871 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:37.871 altname enp217s0f0np0 00:23:37.871 altname ens818f0np0 00:23:37.871 inet 192.168.100.8/24 scope global mlx_0_0 00:23:37.871 valid_lft forever preferred_lft forever 00:23:37.871 17:32:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:37.871 17:32:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:37.871 17:32:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:37.871 17:32:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:37.871 17:32:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:37.871 17:32:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:37.871 17:32:57 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:37.871 17:32:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:37.871 17:32:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:37.871 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:37.871 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:37.871 altname enp217s0f1np1 00:23:37.871 altname ens818f1np1 00:23:37.871 inet 192.168.100.9/24 scope global mlx_0_1 00:23:37.871 valid_lft 
forever preferred_lft forever 00:23:37.871 17:32:57 -- nvmf/common.sh@410 -- # return 0 00:23:37.871 17:32:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:37.871 17:32:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:37.871 17:32:57 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:37.871 17:32:57 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:37.871 17:32:57 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:37.871 17:32:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:37.871 17:32:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:37.871 17:32:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:37.871 17:32:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:37.871 17:32:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:37.871 17:32:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:37.871 17:32:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:37.871 17:32:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:37.871 17:32:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:37.871 17:32:57 -- nvmf/common.sh@104 -- # continue 2 00:23:37.871 17:32:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:37.871 17:32:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:37.871 17:32:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:37.871 17:32:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:37.871 17:32:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:37.871 17:32:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:37.871 17:32:57 -- nvmf/common.sh@104 -- # continue 2 00:23:37.872 17:32:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:37.872 17:32:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:37.872 17:32:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:37.872 17:32:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:37.872 17:32:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:37.872 17:32:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:37.872 17:32:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:37.872 17:32:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:37.872 17:32:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:37.872 17:32:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:37.872 17:32:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:37.872 17:32:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:37.872 17:32:57 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:37.872 192.168.100.9' 00:23:37.872 17:32:57 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:37.872 192.168.100.9' 00:23:37.872 17:32:57 -- nvmf/common.sh@445 -- # head -n 1 00:23:37.872 17:32:57 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:37.872 17:32:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:37.872 192.168.100.9' 00:23:37.872 17:32:57 -- nvmf/common.sh@446 -- # tail -n +2 00:23:37.872 17:32:57 -- nvmf/common.sh@446 -- # head -n 1 00:23:37.872 17:32:57 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:37.872 17:32:57 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:37.872 17:32:57 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:37.872 17:32:57 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:37.872 17:32:57 -- 
nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:37.872 17:32:57 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:37.872 17:32:57 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:37.872 17:32:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:37.872 17:32:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:37.872 17:32:57 -- common/autotest_common.sh@10 -- # set +x 00:23:37.872 17:32:57 -- nvmf/common.sh@469 -- # nvmfpid=2780312 00:23:37.872 17:32:57 -- nvmf/common.sh@470 -- # waitforlisten 2780312 00:23:37.872 17:32:57 -- common/autotest_common.sh@829 -- # '[' -z 2780312 ']' 00:23:37.872 17:32:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.872 17:32:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:37.872 17:32:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.872 17:32:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:37.872 17:32:57 -- common/autotest_common.sh@10 -- # set +x 00:23:37.872 17:32:57 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:37.872 [2024-11-09 17:32:57.267824] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:37.872 [2024-11-09 17:32:57.267878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.872 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.872 [2024-11-09 17:32:57.338378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:37.872 [2024-11-09 17:32:57.413789] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:37.872 [2024-11-09 17:32:57.413896] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.872 [2024-11-09 17:32:57.413907] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.872 [2024-11-09 17:32:57.413915] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
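For orientation, the block above is the target start-up for the AER test: the RDMA kernel modules are loaded, the mlx_0_0/mlx_0_1 addresses are derived, and nvmf_tgt is launched before any RPCs are issued. A minimal sketch of that launch, assuming the SPDK repo root as the working directory (the polling loop below stands in for the harness's waitforlisten helper and is illustrative only):

# start the NVMe-oF target on four cores (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF)
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# block until the app listens on its default RPC socket before issuing any rpc_cmd calls
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done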
00:23:37.872 [2024-11-09 17:32:57.413962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.872 [2024-11-09 17:32:57.414061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.872 [2024-11-09 17:32:57.414081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:37.872 [2024-11-09 17:32:57.414083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.441 17:32:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:38.441 17:32:58 -- common/autotest_common.sh@862 -- # return 0 00:23:38.441 17:32:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:38.441 17:32:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:38.441 17:32:58 -- common/autotest_common.sh@10 -- # set +x 00:23:38.441 17:32:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.441 17:32:58 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:38.441 17:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.441 17:32:58 -- common/autotest_common.sh@10 -- # set +x 00:23:38.441 [2024-11-09 17:32:58.173550] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c4b090/0x1c4f580) succeed. 00:23:38.441 [2024-11-09 17:32:58.182746] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c4c680/0x1c90c20) succeed. 00:23:38.700 17:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.700 17:32:58 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:38.700 17:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.700 17:32:58 -- common/autotest_common.sh@10 -- # set +x 00:23:38.700 Malloc0 00:23:38.700 17:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.700 17:32:58 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:38.700 17:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.700 17:32:58 -- common/autotest_common.sh@10 -- # set +x 00:23:38.700 17:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.700 17:32:58 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:38.700 17:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.700 17:32:58 -- common/autotest_common.sh@10 -- # set +x 00:23:38.700 17:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.700 17:32:58 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:38.700 17:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.700 17:32:58 -- common/autotest_common.sh@10 -- # set +x 00:23:38.700 [2024-11-09 17:32:58.352992] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:38.700 17:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.700 17:32:58 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:38.700 17:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.700 17:32:58 -- common/autotest_common.sh@10 -- # set +x 00:23:38.700 [2024-11-09 17:32:58.360719] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:23:38.700 [ 00:23:38.700 { 00:23:38.700 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:38.700 "subtype": 
"Discovery", 00:23:38.700 "listen_addresses": [], 00:23:38.700 "allow_any_host": true, 00:23:38.700 "hosts": [] 00:23:38.700 }, 00:23:38.700 { 00:23:38.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.700 "subtype": "NVMe", 00:23:38.700 "listen_addresses": [ 00:23:38.700 { 00:23:38.700 "transport": "RDMA", 00:23:38.700 "trtype": "RDMA", 00:23:38.700 "adrfam": "IPv4", 00:23:38.700 "traddr": "192.168.100.8", 00:23:38.700 "trsvcid": "4420" 00:23:38.700 } 00:23:38.700 ], 00:23:38.700 "allow_any_host": true, 00:23:38.700 "hosts": [], 00:23:38.700 "serial_number": "SPDK00000000000001", 00:23:38.700 "model_number": "SPDK bdev Controller", 00:23:38.700 "max_namespaces": 2, 00:23:38.700 "min_cntlid": 1, 00:23:38.700 "max_cntlid": 65519, 00:23:38.700 "namespaces": [ 00:23:38.700 { 00:23:38.700 "nsid": 1, 00:23:38.700 "bdev_name": "Malloc0", 00:23:38.700 "name": "Malloc0", 00:23:38.700 "nguid": "DBD5D79277A0498A8C55518F064965DC", 00:23:38.700 "uuid": "dbd5d792-77a0-498a-8c55-518f064965dc" 00:23:38.700 } 00:23:38.700 ] 00:23:38.700 } 00:23:38.700 ] 00:23:38.700 17:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.700 17:32:58 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:38.700 17:32:58 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:38.700 17:32:58 -- host/aer.sh@33 -- # aerpid=2780403 00:23:38.700 17:32:58 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:38.700 17:32:58 -- common/autotest_common.sh@1254 -- # local i=0 00:23:38.700 17:32:58 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:38.700 17:32:58 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:38.700 17:32:58 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:23:38.700 17:32:58 -- common/autotest_common.sh@1257 -- # i=1 00:23:38.700 17:32:58 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:23:38.700 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.959 17:32:58 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:38.960 17:32:58 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:23:38.960 17:32:58 -- common/autotest_common.sh@1257 -- # i=2 00:23:38.960 17:32:58 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:23:38.960 17:32:58 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:38.960 17:32:58 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:38.960 17:32:58 -- common/autotest_common.sh@1265 -- # return 0 00:23:38.960 17:32:58 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:38.960 17:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.960 17:32:58 -- common/autotest_common.sh@10 -- # set +x 00:23:38.960 Malloc1 00:23:38.960 17:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.960 17:32:58 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:38.960 17:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.960 17:32:58 -- common/autotest_common.sh@10 -- # set +x 00:23:38.960 17:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.960 17:32:58 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:38.960 17:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.960 17:32:58 -- common/autotest_common.sh@10 -- # set +x 00:23:38.960 [ 00:23:38.960 { 00:23:38.960 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:38.960 "subtype": "Discovery", 00:23:38.960 "listen_addresses": [], 00:23:38.960 "allow_any_host": true, 00:23:38.960 "hosts": [] 00:23:38.960 }, 00:23:38.960 { 00:23:38.960 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.960 "subtype": "NVMe", 00:23:38.960 "listen_addresses": [ 00:23:38.960 { 00:23:38.960 "transport": "RDMA", 00:23:38.960 "trtype": "RDMA", 00:23:38.960 "adrfam": "IPv4", 00:23:38.960 "traddr": "192.168.100.8", 00:23:38.960 "trsvcid": "4420" 00:23:38.960 } 00:23:38.960 ], 00:23:38.960 "allow_any_host": true, 00:23:38.960 "hosts": [], 00:23:38.960 "serial_number": "SPDK00000000000001", 00:23:38.960 "model_number": "SPDK bdev Controller", 00:23:38.960 "max_namespaces": 2, 00:23:38.960 "min_cntlid": 1, 00:23:38.960 "max_cntlid": 65519, 00:23:38.960 "namespaces": [ 00:23:38.960 { 00:23:38.960 "nsid": 1, 00:23:38.960 "bdev_name": "Malloc0", 00:23:38.960 "name": "Malloc0", 00:23:38.960 "nguid": "DBD5D79277A0498A8C55518F064965DC", 00:23:38.960 "uuid": "dbd5d792-77a0-498a-8c55-518f064965dc" 00:23:38.960 }, 00:23:38.960 { 00:23:38.960 "nsid": 2, 00:23:38.960 "bdev_name": "Malloc1", 00:23:38.960 "name": "Malloc1", 00:23:38.960 "nguid": "241A8AE1DE2F4EE5B2D149EB26F05E4D", 00:23:38.960 "uuid": "241a8ae1-de2f-4ee5-b2d1-49eb26f05e4d" 00:23:38.960 } 00:23:38.960 ] 00:23:38.960 } 00:23:38.960 ] 00:23:38.960 17:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.960 17:32:58 -- host/aer.sh@43 -- # wait 2780403 00:23:38.960 Asynchronous Event Request test 00:23:38.960 Attaching to 192.168.100.8 00:23:38.960 Attached to 192.168.100.8 00:23:38.960 Registering asynchronous event callbacks... 00:23:38.960 Starting namespace attribute notice tests for all controllers... 00:23:38.960 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:38.960 aer_cb - Changed Namespace 00:23:38.960 Cleaning up... 
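That "Asynchronous Event Request test ... Cleaning up..." block is the heart of aer.sh: with the aer tool attached to 192.168.100.8:4420, adding a second namespace makes the target emit a namespace-attribute-changed AEN, which the tool reports via aer_cb before touching /tmp/aer_touch_file. Condensed, the RPC sequence that produced it looks like the sketch below (rpc_cmd in this harness forwards to SPDK's scripts/rpc.py against /var/tmp/spdk.sock; the explicit rpc.py form here is an illustrative stand-in, not the literal script):

./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# the aer tool is attached at this point; adding a second namespace triggers the AEN checked above
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2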
00:23:38.960 17:32:58 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:38.960 17:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.960 17:32:58 -- common/autotest_common.sh@10 -- # set +x 00:23:38.960 17:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.960 17:32:58 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:38.960 17:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.960 17:32:58 -- common/autotest_common.sh@10 -- # set +x 00:23:39.219 17:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.219 17:32:58 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:39.219 17:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.219 17:32:58 -- common/autotest_common.sh@10 -- # set +x 00:23:39.219 17:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.219 17:32:58 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:39.219 17:32:58 -- host/aer.sh@51 -- # nvmftestfini 00:23:39.219 17:32:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:39.219 17:32:58 -- nvmf/common.sh@116 -- # sync 00:23:39.219 17:32:58 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:39.219 17:32:58 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:39.219 17:32:58 -- nvmf/common.sh@119 -- # set +e 00:23:39.219 17:32:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:39.219 17:32:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:39.219 rmmod nvme_rdma 00:23:39.219 rmmod nvme_fabrics 00:23:39.219 17:32:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:39.219 17:32:58 -- nvmf/common.sh@123 -- # set -e 00:23:39.219 17:32:58 -- nvmf/common.sh@124 -- # return 0 00:23:39.219 17:32:58 -- nvmf/common.sh@477 -- # '[' -n 2780312 ']' 00:23:39.219 17:32:58 -- nvmf/common.sh@478 -- # killprocess 2780312 00:23:39.219 17:32:58 -- common/autotest_common.sh@936 -- # '[' -z 2780312 ']' 00:23:39.219 17:32:58 -- common/autotest_common.sh@940 -- # kill -0 2780312 00:23:39.220 17:32:58 -- common/autotest_common.sh@941 -- # uname 00:23:39.220 17:32:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:39.220 17:32:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2780312 00:23:39.220 17:32:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:39.220 17:32:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:39.220 17:32:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2780312' 00:23:39.220 killing process with pid 2780312 00:23:39.220 17:32:58 -- common/autotest_common.sh@955 -- # kill 2780312 00:23:39.220 [2024-11-09 17:32:58.853513] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:23:39.220 17:32:58 -- common/autotest_common.sh@960 -- # wait 2780312 00:23:39.479 17:32:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:39.479 17:32:59 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:39.479 00:23:39.479 real 0m8.293s 00:23:39.479 user 0m8.404s 00:23:39.479 sys 0m5.353s 00:23:39.479 17:32:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:39.479 17:32:59 -- common/autotest_common.sh@10 -- # set +x 00:23:39.479 ************************************ 00:23:39.479 END TEST nvmf_aer 00:23:39.479 ************************************ 00:23:39.479 17:32:59 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:23:39.479 17:32:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:39.479 17:32:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:39.479 17:32:59 -- common/autotest_common.sh@10 -- # set +x 00:23:39.479 ************************************ 00:23:39.479 START TEST nvmf_async_init 00:23:39.479 ************************************ 00:23:39.479 17:32:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:23:39.739 * Looking for test storage... 00:23:39.739 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:39.739 17:32:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:39.739 17:32:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:39.739 17:32:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:39.739 17:32:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:39.739 17:32:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:39.739 17:32:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:39.739 17:32:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:39.739 17:32:59 -- scripts/common.sh@335 -- # IFS=.-: 00:23:39.739 17:32:59 -- scripts/common.sh@335 -- # read -ra ver1 00:23:39.739 17:32:59 -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.739 17:32:59 -- scripts/common.sh@336 -- # read -ra ver2 00:23:39.739 17:32:59 -- scripts/common.sh@337 -- # local 'op=<' 00:23:39.739 17:32:59 -- scripts/common.sh@339 -- # ver1_l=2 00:23:39.739 17:32:59 -- scripts/common.sh@340 -- # ver2_l=1 00:23:39.739 17:32:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:39.739 17:32:59 -- scripts/common.sh@343 -- # case "$op" in 00:23:39.739 17:32:59 -- scripts/common.sh@344 -- # : 1 00:23:39.739 17:32:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:39.739 17:32:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:39.739 17:32:59 -- scripts/common.sh@364 -- # decimal 1 00:23:39.739 17:32:59 -- scripts/common.sh@352 -- # local d=1 00:23:39.739 17:32:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.739 17:32:59 -- scripts/common.sh@354 -- # echo 1 00:23:39.739 17:32:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:39.739 17:32:59 -- scripts/common.sh@365 -- # decimal 2 00:23:39.739 17:32:59 -- scripts/common.sh@352 -- # local d=2 00:23:39.739 17:32:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.739 17:32:59 -- scripts/common.sh@354 -- # echo 2 00:23:39.739 17:32:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:39.739 17:32:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:39.739 17:32:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:39.739 17:32:59 -- scripts/common.sh@367 -- # return 0 00:23:39.739 17:32:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.739 17:32:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:39.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.739 --rc genhtml_branch_coverage=1 00:23:39.739 --rc genhtml_function_coverage=1 00:23:39.739 --rc genhtml_legend=1 00:23:39.739 --rc geninfo_all_blocks=1 00:23:39.739 --rc geninfo_unexecuted_blocks=1 00:23:39.739 00:23:39.739 ' 00:23:39.739 17:32:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:39.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.739 --rc genhtml_branch_coverage=1 00:23:39.739 --rc genhtml_function_coverage=1 00:23:39.739 --rc genhtml_legend=1 00:23:39.739 --rc geninfo_all_blocks=1 00:23:39.739 --rc geninfo_unexecuted_blocks=1 00:23:39.739 00:23:39.739 ' 00:23:39.739 17:32:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:39.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.739 --rc genhtml_branch_coverage=1 00:23:39.739 --rc genhtml_function_coverage=1 00:23:39.739 --rc genhtml_legend=1 00:23:39.739 --rc geninfo_all_blocks=1 00:23:39.739 --rc geninfo_unexecuted_blocks=1 00:23:39.739 00:23:39.739 ' 00:23:39.739 17:32:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:39.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.739 --rc genhtml_branch_coverage=1 00:23:39.739 --rc genhtml_function_coverage=1 00:23:39.739 --rc genhtml_legend=1 00:23:39.739 --rc geninfo_all_blocks=1 00:23:39.739 --rc geninfo_unexecuted_blocks=1 00:23:39.739 00:23:39.739 ' 00:23:39.739 17:32:59 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.739 17:32:59 -- nvmf/common.sh@7 -- # uname -s 00:23:39.739 17:32:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.739 17:32:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.739 17:32:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.739 17:32:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.739 17:32:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.739 17:32:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.739 17:32:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.739 17:32:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.739 17:32:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.740 17:32:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.740 17:32:59 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:39.740 17:32:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:39.740 17:32:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.740 17:32:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.740 17:32:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.740 17:32:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:39.740 17:32:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.740 17:32:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.740 17:32:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.740 17:32:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.740 17:32:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.740 17:32:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.740 17:32:59 -- paths/export.sh@5 -- # export PATH 00:23:39.740 17:32:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.740 17:32:59 -- nvmf/common.sh@46 -- # : 0 00:23:39.740 17:32:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:39.740 17:32:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:39.740 17:32:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:39.740 17:32:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.740 17:32:59 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.740 17:32:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:39.740 17:32:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:39.740 17:32:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:39.740 17:32:59 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:39.740 17:32:59 -- host/async_init.sh@14 -- # null_block_size=512 00:23:39.740 17:32:59 -- host/async_init.sh@15 -- # null_bdev=null0 00:23:39.740 17:32:59 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:39.740 17:32:59 -- host/async_init.sh@20 -- # uuidgen 00:23:39.740 17:32:59 -- host/async_init.sh@20 -- # tr -d - 00:23:39.740 17:32:59 -- host/async_init.sh@20 -- # nguid=7f240ba1e92341fba0ec4ecd0246e0c2 00:23:39.740 17:32:59 -- host/async_init.sh@22 -- # nvmftestinit 00:23:39.740 17:32:59 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:39.740 17:32:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.740 17:32:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:39.740 17:32:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:39.740 17:32:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:39.740 17:32:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.740 17:32:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.740 17:32:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.740 17:32:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:39.740 17:32:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:39.740 17:32:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:39.740 17:32:59 -- common/autotest_common.sh@10 -- # set +x 00:23:46.318 17:33:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:46.318 17:33:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:46.318 17:33:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:46.318 17:33:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:46.318 17:33:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:46.318 17:33:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:46.318 17:33:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:46.318 17:33:05 -- nvmf/common.sh@294 -- # net_devs=() 00:23:46.318 17:33:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:46.318 17:33:05 -- nvmf/common.sh@295 -- # e810=() 00:23:46.318 17:33:05 -- nvmf/common.sh@295 -- # local -ga e810 00:23:46.318 17:33:05 -- nvmf/common.sh@296 -- # x722=() 00:23:46.318 17:33:05 -- nvmf/common.sh@296 -- # local -ga x722 00:23:46.318 17:33:05 -- nvmf/common.sh@297 -- # mlx=() 00:23:46.318 17:33:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:46.318 17:33:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.318 17:33:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.318 17:33:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.318 17:33:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.318 17:33:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.318 17:33:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.318 17:33:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.318 17:33:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.318 17:33:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.318 17:33:05 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.318 17:33:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.318 17:33:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:46.318 17:33:05 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:46.318 17:33:05 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:46.318 17:33:05 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:46.318 17:33:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:46.318 17:33:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:46.318 17:33:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:46.318 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:46.318 17:33:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:46.318 17:33:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:46.318 17:33:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:46.318 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:46.318 17:33:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:46.318 17:33:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:46.318 17:33:05 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:46.318 17:33:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.318 17:33:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:46.318 17:33:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.318 17:33:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:46.318 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:46.318 17:33:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.318 17:33:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:46.318 17:33:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.318 17:33:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:46.318 17:33:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.318 17:33:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:46.318 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:46.318 17:33:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.318 17:33:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:46.318 17:33:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:46.318 17:33:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:46.318 17:33:05 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:46.318 17:33:05 -- nvmf/common.sh@57 -- # uname 00:23:46.318 17:33:05 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:46.318 17:33:05 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:46.318 17:33:05 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:46.318 17:33:05 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:46.318 17:33:05 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:46.318 17:33:05 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:46.318 17:33:05 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:46.318 17:33:05 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:46.318 17:33:05 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:46.318 17:33:05 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:46.318 17:33:05 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:46.318 17:33:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:46.318 17:33:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:46.318 17:33:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:46.318 17:33:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:46.318 17:33:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:46.318 17:33:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:46.318 17:33:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.318 17:33:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:46.318 17:33:05 -- nvmf/common.sh@104 -- # continue 2 00:23:46.318 17:33:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:46.318 17:33:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.318 17:33:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.318 17:33:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:46.318 17:33:05 -- nvmf/common.sh@104 -- # continue 2 00:23:46.318 17:33:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:46.318 17:33:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:46.318 17:33:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:46.318 17:33:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:46.318 17:33:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:46.318 17:33:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:46.318 17:33:05 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:46.318 17:33:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:46.318 17:33:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:46.318 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:46.318 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:46.318 altname enp217s0f0np0 00:23:46.318 altname ens818f0np0 00:23:46.318 inet 192.168.100.8/24 scope global mlx_0_0 00:23:46.318 valid_lft forever preferred_lft forever 00:23:46.318 17:33:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:46.318 17:33:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:46.318 17:33:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:46.318 17:33:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:46.318 17:33:05 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:23:46.319 17:33:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:46.319 17:33:05 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:46.319 17:33:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:46.319 17:33:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:46.319 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:46.319 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:46.319 altname enp217s0f1np1 00:23:46.319 altname ens818f1np1 00:23:46.319 inet 192.168.100.9/24 scope global mlx_0_1 00:23:46.319 valid_lft forever preferred_lft forever 00:23:46.319 17:33:05 -- nvmf/common.sh@410 -- # return 0 00:23:46.319 17:33:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:46.319 17:33:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:46.319 17:33:05 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:46.319 17:33:05 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:46.319 17:33:05 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:46.319 17:33:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:46.319 17:33:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:46.319 17:33:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:46.319 17:33:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:46.319 17:33:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:46.319 17:33:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:46.319 17:33:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.319 17:33:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:46.319 17:33:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:46.319 17:33:05 -- nvmf/common.sh@104 -- # continue 2 00:23:46.319 17:33:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:46.319 17:33:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.319 17:33:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:46.319 17:33:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.319 17:33:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:46.319 17:33:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:46.319 17:33:05 -- nvmf/common.sh@104 -- # continue 2 00:23:46.319 17:33:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:46.319 17:33:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:46.319 17:33:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:46.319 17:33:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:46.319 17:33:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:46.319 17:33:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:46.319 17:33:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:46.319 17:33:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:46.319 17:33:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:46.319 17:33:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:46.319 17:33:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:46.319 17:33:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:46.319 17:33:05 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:46.319 192.168.100.9' 00:23:46.319 17:33:05 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:46.319 192.168.100.9' 00:23:46.319 17:33:05 -- nvmf/common.sh@445 -- # head -n 1 00:23:46.319 17:33:05 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:46.319 17:33:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:46.319 192.168.100.9' 00:23:46.319 17:33:05 -- nvmf/common.sh@446 -- # head -n 1 00:23:46.319 17:33:05 -- nvmf/common.sh@446 -- # tail -n +2 00:23:46.319 17:33:05 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:46.319 17:33:05 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:46.319 17:33:05 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:46.319 17:33:05 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:46.319 17:33:05 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:46.319 17:33:05 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:46.319 17:33:05 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:46.319 17:33:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:46.319 17:33:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:46.319 17:33:05 -- common/autotest_common.sh@10 -- # set +x 00:23:46.319 17:33:05 -- nvmf/common.sh@469 -- # nvmfpid=2784365 00:23:46.319 17:33:05 -- nvmf/common.sh@470 -- # waitforlisten 2784365 00:23:46.319 17:33:05 -- common/autotest_common.sh@829 -- # '[' -z 2784365 ']' 00:23:46.319 17:33:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.319 17:33:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.319 17:33:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.319 17:33:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.319 17:33:05 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:46.319 17:33:05 -- common/autotest_common.sh@10 -- # set +x 00:23:46.319 [2024-11-09 17:33:05.951934] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:46.319 [2024-11-09 17:33:05.951983] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.319 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.319 [2024-11-09 17:33:06.020423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.578 [2024-11-09 17:33:06.090302] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:46.578 [2024-11-09 17:33:06.090412] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.578 [2024-11-09 17:33:06.090422] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.578 [2024-11-09 17:33:06.090431] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
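The same interface discovery runs again for async_init.sh: the harness walks the mlx net devices and scrapes their IPv4 addresses into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. An illustrative rewrite of the pipeline traced above (not the literal common.sh helpers):

# get_ip_address <ifname>: first IPv4 address on the interface, without the prefix length
get_ip_address() {
  ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run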
00:23:46.578 [2024-11-09 17:33:06.090459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.147 17:33:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.147 17:33:06 -- common/autotest_common.sh@862 -- # return 0 00:23:47.147 17:33:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:47.147 17:33:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:47.147 17:33:06 -- common/autotest_common.sh@10 -- # set +x 00:23:47.147 17:33:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.147 17:33:06 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:47.147 17:33:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.147 17:33:06 -- common/autotest_common.sh@10 -- # set +x 00:23:47.147 [2024-11-09 17:33:06.835020] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb0ff30/0xb14420) succeed. 00:23:47.147 [2024-11-09 17:33:06.843993] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb11430/0xb55ac0) succeed. 00:23:47.147 17:33:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.147 17:33:06 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:47.147 17:33:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.147 17:33:06 -- common/autotest_common.sh@10 -- # set +x 00:23:47.147 null0 00:23:47.147 17:33:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.147 17:33:06 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:47.147 17:33:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.147 17:33:06 -- common/autotest_common.sh@10 -- # set +x 00:23:47.147 17:33:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.147 17:33:06 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:47.147 17:33:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.147 17:33:06 -- common/autotest_common.sh@10 -- # set +x 00:23:47.147 17:33:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.147 17:33:06 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7f240ba1e92341fba0ec4ecd0246e0c2 00:23:47.147 17:33:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.147 17:33:06 -- common/autotest_common.sh@10 -- # set +x 00:23:47.406 17:33:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.407 17:33:06 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:23:47.407 17:33:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.407 17:33:06 -- common/autotest_common.sh@10 -- # set +x 00:23:47.407 [2024-11-09 17:33:06.924333] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:47.407 17:33:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.407 17:33:06 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:47.407 17:33:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.407 17:33:06 -- common/autotest_common.sh@10 -- # set +x 00:23:47.407 nvme0n1 00:23:47.407 17:33:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.407 17:33:07 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:47.407 17:33:07 -- common/autotest_common.sh@561 
-- # xtrace_disable 00:23:47.407 17:33:07 -- common/autotest_common.sh@10 -- # set +x 00:23:47.407 [ 00:23:47.407 { 00:23:47.407 "name": "nvme0n1", 00:23:47.407 "aliases": [ 00:23:47.407 "7f240ba1-e923-41fb-a0ec-4ecd0246e0c2" 00:23:47.407 ], 00:23:47.407 "product_name": "NVMe disk", 00:23:47.407 "block_size": 512, 00:23:47.407 "num_blocks": 2097152, 00:23:47.407 "uuid": "7f240ba1-e923-41fb-a0ec-4ecd0246e0c2", 00:23:47.407 "assigned_rate_limits": { 00:23:47.407 "rw_ios_per_sec": 0, 00:23:47.407 "rw_mbytes_per_sec": 0, 00:23:47.407 "r_mbytes_per_sec": 0, 00:23:47.407 "w_mbytes_per_sec": 0 00:23:47.407 }, 00:23:47.407 "claimed": false, 00:23:47.407 "zoned": false, 00:23:47.407 "supported_io_types": { 00:23:47.407 "read": true, 00:23:47.407 "write": true, 00:23:47.407 "unmap": false, 00:23:47.407 "write_zeroes": true, 00:23:47.407 "flush": true, 00:23:47.407 "reset": true, 00:23:47.407 "compare": true, 00:23:47.407 "compare_and_write": true, 00:23:47.407 "abort": true, 00:23:47.407 "nvme_admin": true, 00:23:47.407 "nvme_io": true 00:23:47.407 }, 00:23:47.407 "memory_domains": [ 00:23:47.407 { 00:23:47.407 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:47.407 "dma_device_type": 0 00:23:47.407 } 00:23:47.407 ], 00:23:47.407 "driver_specific": { 00:23:47.407 "nvme": [ 00:23:47.407 { 00:23:47.407 "trid": { 00:23:47.407 "trtype": "RDMA", 00:23:47.407 "adrfam": "IPv4", 00:23:47.407 "traddr": "192.168.100.8", 00:23:47.407 "trsvcid": "4420", 00:23:47.407 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:47.407 }, 00:23:47.407 "ctrlr_data": { 00:23:47.407 "cntlid": 1, 00:23:47.407 "vendor_id": "0x8086", 00:23:47.407 "model_number": "SPDK bdev Controller", 00:23:47.407 "serial_number": "00000000000000000000", 00:23:47.407 "firmware_revision": "24.01.1", 00:23:47.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.407 "oacs": { 00:23:47.407 "security": 0, 00:23:47.407 "format": 0, 00:23:47.407 "firmware": 0, 00:23:47.407 "ns_manage": 0 00:23:47.407 }, 00:23:47.407 "multi_ctrlr": true, 00:23:47.407 "ana_reporting": false 00:23:47.407 }, 00:23:47.407 "vs": { 00:23:47.407 "nvme_version": "1.3" 00:23:47.407 }, 00:23:47.407 "ns_data": { 00:23:47.407 "id": 1, 00:23:47.407 "can_share": true 00:23:47.407 } 00:23:47.407 } 00:23:47.407 ], 00:23:47.407 "mp_policy": "active_passive" 00:23:47.407 } 00:23:47.407 } 00:23:47.407 ] 00:23:47.407 17:33:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.407 17:33:07 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:47.407 17:33:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.407 17:33:07 -- common/autotest_common.sh@10 -- # set +x 00:23:47.407 [2024-11-09 17:33:07.036169] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:47.407 [2024-11-09 17:33:07.058719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:47.407 [2024-11-09 17:33:07.082051] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
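At this point async_init.sh has verified that nvme0n1 survives a controller reset: attach over RDMA, dump the bdev, reset, dump again (the second bdev_get_bdevs output that follows shows cntlid advancing from 1 to 2 after the reconnect). A condensed sketch of that sequence, again using the explicit rpc.py form as an illustrative stand-in for rpc_cmd:

./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_get_bdevs -b nvme0n1          # baseline dump: cntlid 1
./scripts/rpc.py bdev_nvme_reset_controller nvme0   # disconnect + reconnect, as logged above
./scripts/rpc.py bdev_get_bdevs -b nvme0n1          # same uuid, cntlid now 2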
00:23:47.407 17:33:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.407 17:33:07 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:47.407 17:33:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.407 17:33:07 -- common/autotest_common.sh@10 -- # set +x 00:23:47.407 [ 00:23:47.407 { 00:23:47.407 "name": "nvme0n1", 00:23:47.407 "aliases": [ 00:23:47.407 "7f240ba1-e923-41fb-a0ec-4ecd0246e0c2" 00:23:47.407 ], 00:23:47.407 "product_name": "NVMe disk", 00:23:47.407 "block_size": 512, 00:23:47.407 "num_blocks": 2097152, 00:23:47.407 "uuid": "7f240ba1-e923-41fb-a0ec-4ecd0246e0c2", 00:23:47.407 "assigned_rate_limits": { 00:23:47.407 "rw_ios_per_sec": 0, 00:23:47.407 "rw_mbytes_per_sec": 0, 00:23:47.407 "r_mbytes_per_sec": 0, 00:23:47.407 "w_mbytes_per_sec": 0 00:23:47.407 }, 00:23:47.407 "claimed": false, 00:23:47.407 "zoned": false, 00:23:47.407 "supported_io_types": { 00:23:47.407 "read": true, 00:23:47.407 "write": true, 00:23:47.407 "unmap": false, 00:23:47.407 "write_zeroes": true, 00:23:47.407 "flush": true, 00:23:47.407 "reset": true, 00:23:47.407 "compare": true, 00:23:47.407 "compare_and_write": true, 00:23:47.407 "abort": true, 00:23:47.407 "nvme_admin": true, 00:23:47.407 "nvme_io": true 00:23:47.407 }, 00:23:47.407 "memory_domains": [ 00:23:47.407 { 00:23:47.407 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:47.407 "dma_device_type": 0 00:23:47.407 } 00:23:47.407 ], 00:23:47.407 "driver_specific": { 00:23:47.407 "nvme": [ 00:23:47.407 { 00:23:47.407 "trid": { 00:23:47.407 "trtype": "RDMA", 00:23:47.407 "adrfam": "IPv4", 00:23:47.407 "traddr": "192.168.100.8", 00:23:47.407 "trsvcid": "4420", 00:23:47.407 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:47.407 }, 00:23:47.407 "ctrlr_data": { 00:23:47.407 "cntlid": 2, 00:23:47.407 "vendor_id": "0x8086", 00:23:47.407 "model_number": "SPDK bdev Controller", 00:23:47.407 "serial_number": "00000000000000000000", 00:23:47.407 "firmware_revision": "24.01.1", 00:23:47.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.407 "oacs": { 00:23:47.407 "security": 0, 00:23:47.407 "format": 0, 00:23:47.407 "firmware": 0, 00:23:47.407 "ns_manage": 0 00:23:47.407 }, 00:23:47.407 "multi_ctrlr": true, 00:23:47.407 "ana_reporting": false 00:23:47.407 }, 00:23:47.407 "vs": { 00:23:47.407 "nvme_version": "1.3" 00:23:47.407 }, 00:23:47.407 "ns_data": { 00:23:47.407 "id": 1, 00:23:47.407 "can_share": true 00:23:47.407 } 00:23:47.407 } 00:23:47.407 ], 00:23:47.407 "mp_policy": "active_passive" 00:23:47.407 } 00:23:47.407 } 00:23:47.407 ] 00:23:47.407 17:33:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.407 17:33:07 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.407 17:33:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.407 17:33:07 -- common/autotest_common.sh@10 -- # set +x 00:23:47.407 17:33:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.407 17:33:07 -- host/async_init.sh@53 -- # mktemp 00:23:47.407 17:33:07 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.d1ebU88DHw 00:23:47.407 17:33:07 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:47.407 17:33:07 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.d1ebU88DHw 00:23:47.407 17:33:07 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:47.407 17:33:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.407 17:33:07 -- common/autotest_common.sh@10 -- # set +x 
00:23:47.407 17:33:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.407 17:33:07 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:23:47.407 17:33:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.407 17:33:07 -- common/autotest_common.sh@10 -- # set +x 00:23:47.407 [2024-11-09 17:33:07.148696] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:23:47.407 17:33:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.407 17:33:07 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.d1ebU88DHw 00:23:47.407 17:33:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.407 17:33:07 -- common/autotest_common.sh@10 -- # set +x 00:23:47.407 17:33:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.408 17:33:07 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.d1ebU88DHw 00:23:47.408 17:33:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.408 17:33:07 -- common/autotest_common.sh@10 -- # set +x 00:23:47.408 [2024-11-09 17:33:07.164722] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:47.667 nvme0n1 00:23:47.667 17:33:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.667 17:33:07 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:47.667 17:33:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.667 17:33:07 -- common/autotest_common.sh@10 -- # set +x 00:23:47.667 [ 00:23:47.667 { 00:23:47.667 "name": "nvme0n1", 00:23:47.667 "aliases": [ 00:23:47.667 "7f240ba1-e923-41fb-a0ec-4ecd0246e0c2" 00:23:47.667 ], 00:23:47.667 "product_name": "NVMe disk", 00:23:47.667 "block_size": 512, 00:23:47.667 "num_blocks": 2097152, 00:23:47.667 "uuid": "7f240ba1-e923-41fb-a0ec-4ecd0246e0c2", 00:23:47.667 "assigned_rate_limits": { 00:23:47.667 "rw_ios_per_sec": 0, 00:23:47.667 "rw_mbytes_per_sec": 0, 00:23:47.667 "r_mbytes_per_sec": 0, 00:23:47.667 "w_mbytes_per_sec": 0 00:23:47.667 }, 00:23:47.667 "claimed": false, 00:23:47.667 "zoned": false, 00:23:47.667 "supported_io_types": { 00:23:47.667 "read": true, 00:23:47.667 "write": true, 00:23:47.667 "unmap": false, 00:23:47.667 "write_zeroes": true, 00:23:47.667 "flush": true, 00:23:47.667 "reset": true, 00:23:47.667 "compare": true, 00:23:47.667 "compare_and_write": true, 00:23:47.667 "abort": true, 00:23:47.667 "nvme_admin": true, 00:23:47.667 "nvme_io": true 00:23:47.667 }, 00:23:47.667 "memory_domains": [ 00:23:47.667 { 00:23:47.667 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:47.667 "dma_device_type": 0 00:23:47.667 } 00:23:47.667 ], 00:23:47.667 "driver_specific": { 00:23:47.667 "nvme": [ 00:23:47.667 { 00:23:47.667 "trid": { 00:23:47.667 "trtype": "RDMA", 00:23:47.667 "adrfam": "IPv4", 00:23:47.667 "traddr": "192.168.100.8", 00:23:47.667 "trsvcid": "4421", 00:23:47.667 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:47.667 }, 00:23:47.667 "ctrlr_data": { 00:23:47.667 "cntlid": 3, 00:23:47.667 "vendor_id": "0x8086", 00:23:47.667 "model_number": "SPDK bdev Controller", 00:23:47.667 "serial_number": "00000000000000000000", 00:23:47.667 "firmware_revision": "24.01.1", 00:23:47.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.667 
"oacs": { 00:23:47.667 "security": 0, 00:23:47.667 "format": 0, 00:23:47.667 "firmware": 0, 00:23:47.667 "ns_manage": 0 00:23:47.667 }, 00:23:47.667 "multi_ctrlr": true, 00:23:47.667 "ana_reporting": false 00:23:47.667 }, 00:23:47.667 "vs": { 00:23:47.667 "nvme_version": "1.3" 00:23:47.667 }, 00:23:47.667 "ns_data": { 00:23:47.667 "id": 1, 00:23:47.667 "can_share": true 00:23:47.667 } 00:23:47.667 } 00:23:47.668 ], 00:23:47.668 "mp_policy": "active_passive" 00:23:47.668 } 00:23:47.668 } 00:23:47.668 ] 00:23:47.668 17:33:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.668 17:33:07 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.668 17:33:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.668 17:33:07 -- common/autotest_common.sh@10 -- # set +x 00:23:47.668 17:33:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.668 17:33:07 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.d1ebU88DHw 00:23:47.668 17:33:07 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:47.668 17:33:07 -- host/async_init.sh@78 -- # nvmftestfini 00:23:47.668 17:33:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:47.668 17:33:07 -- nvmf/common.sh@116 -- # sync 00:23:47.668 17:33:07 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:47.668 17:33:07 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:47.668 17:33:07 -- nvmf/common.sh@119 -- # set +e 00:23:47.668 17:33:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:47.668 17:33:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:47.668 rmmod nvme_rdma 00:23:47.668 rmmod nvme_fabrics 00:23:47.668 17:33:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:47.668 17:33:07 -- nvmf/common.sh@123 -- # set -e 00:23:47.668 17:33:07 -- nvmf/common.sh@124 -- # return 0 00:23:47.668 17:33:07 -- nvmf/common.sh@477 -- # '[' -n 2784365 ']' 00:23:47.668 17:33:07 -- nvmf/common.sh@478 -- # killprocess 2784365 00:23:47.668 17:33:07 -- common/autotest_common.sh@936 -- # '[' -z 2784365 ']' 00:23:47.668 17:33:07 -- common/autotest_common.sh@940 -- # kill -0 2784365 00:23:47.668 17:33:07 -- common/autotest_common.sh@941 -- # uname 00:23:47.668 17:33:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:47.668 17:33:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2784365 00:23:47.668 17:33:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:47.668 17:33:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:47.668 17:33:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2784365' 00:23:47.668 killing process with pid 2784365 00:23:47.668 17:33:07 -- common/autotest_common.sh@955 -- # kill 2784365 00:23:47.668 17:33:07 -- common/autotest_common.sh@960 -- # wait 2784365 00:23:47.927 17:33:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:47.927 17:33:07 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:47.927 00:23:47.927 real 0m8.448s 00:23:47.927 user 0m3.635s 00:23:47.927 sys 0m5.446s 00:23:47.927 17:33:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:47.927 17:33:07 -- common/autotest_common.sh@10 -- # set +x 00:23:47.927 ************************************ 00:23:47.927 END TEST nvmf_async_init 00:23:47.927 ************************************ 00:23:47.927 17:33:07 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:23:47.927 17:33:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:47.927 
17:33:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:47.927 17:33:07 -- common/autotest_common.sh@10 -- # set +x 00:23:47.927 ************************************ 00:23:47.927 START TEST dma 00:23:47.927 ************************************ 00:23:47.927 17:33:07 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:23:48.187 * Looking for test storage... 00:23:48.187 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:48.187 17:33:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:48.187 17:33:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:48.187 17:33:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:48.187 17:33:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:48.187 17:33:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:48.187 17:33:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:48.187 17:33:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:48.187 17:33:07 -- scripts/common.sh@335 -- # IFS=.-: 00:23:48.187 17:33:07 -- scripts/common.sh@335 -- # read -ra ver1 00:23:48.187 17:33:07 -- scripts/common.sh@336 -- # IFS=.-: 00:23:48.187 17:33:07 -- scripts/common.sh@336 -- # read -ra ver2 00:23:48.187 17:33:07 -- scripts/common.sh@337 -- # local 'op=<' 00:23:48.187 17:33:07 -- scripts/common.sh@339 -- # ver1_l=2 00:23:48.187 17:33:07 -- scripts/common.sh@340 -- # ver2_l=1 00:23:48.187 17:33:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:48.187 17:33:07 -- scripts/common.sh@343 -- # case "$op" in 00:23:48.187 17:33:07 -- scripts/common.sh@344 -- # : 1 00:23:48.187 17:33:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:48.187 17:33:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:48.187 17:33:07 -- scripts/common.sh@364 -- # decimal 1 00:23:48.187 17:33:07 -- scripts/common.sh@352 -- # local d=1 00:23:48.187 17:33:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:48.187 17:33:07 -- scripts/common.sh@354 -- # echo 1 00:23:48.187 17:33:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:48.187 17:33:07 -- scripts/common.sh@365 -- # decimal 2 00:23:48.187 17:33:07 -- scripts/common.sh@352 -- # local d=2 00:23:48.187 17:33:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:48.187 17:33:07 -- scripts/common.sh@354 -- # echo 2 00:23:48.187 17:33:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:48.187 17:33:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:48.187 17:33:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:48.187 17:33:07 -- scripts/common.sh@367 -- # return 0 00:23:48.187 17:33:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:48.187 17:33:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:48.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.187 --rc genhtml_branch_coverage=1 00:23:48.187 --rc genhtml_function_coverage=1 00:23:48.187 --rc genhtml_legend=1 00:23:48.187 --rc geninfo_all_blocks=1 00:23:48.187 --rc geninfo_unexecuted_blocks=1 00:23:48.187 00:23:48.187 ' 00:23:48.187 17:33:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:48.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.187 --rc genhtml_branch_coverage=1 00:23:48.187 --rc genhtml_function_coverage=1 00:23:48.187 --rc genhtml_legend=1 00:23:48.187 --rc geninfo_all_blocks=1 00:23:48.187 --rc geninfo_unexecuted_blocks=1 00:23:48.187 00:23:48.187 ' 00:23:48.187 17:33:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:48.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.187 --rc genhtml_branch_coverage=1 00:23:48.188 --rc genhtml_function_coverage=1 00:23:48.188 --rc genhtml_legend=1 00:23:48.188 --rc geninfo_all_blocks=1 00:23:48.188 --rc geninfo_unexecuted_blocks=1 00:23:48.188 00:23:48.188 ' 00:23:48.188 17:33:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:48.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.188 --rc genhtml_branch_coverage=1 00:23:48.188 --rc genhtml_function_coverage=1 00:23:48.188 --rc genhtml_legend=1 00:23:48.188 --rc geninfo_all_blocks=1 00:23:48.188 --rc geninfo_unexecuted_blocks=1 00:23:48.188 00:23:48.188 ' 00:23:48.188 17:33:07 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.188 17:33:07 -- nvmf/common.sh@7 -- # uname -s 00:23:48.188 17:33:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.188 17:33:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.188 17:33:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.188 17:33:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.188 17:33:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.188 17:33:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.188 17:33:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.188 17:33:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.188 17:33:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.188 17:33:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.188 17:33:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
00:23:48.188 17:33:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:48.188 17:33:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.188 17:33:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.188 17:33:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.188 17:33:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:48.188 17:33:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.188 17:33:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.188 17:33:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.188 17:33:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.188 17:33:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.188 17:33:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.188 17:33:07 -- paths/export.sh@5 -- # export PATH 00:23:48.188 17:33:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.188 17:33:07 -- nvmf/common.sh@46 -- # : 0 00:23:48.188 17:33:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:48.188 17:33:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:48.188 17:33:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:48.188 17:33:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.188 17:33:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.188 17:33:07 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:48.188 17:33:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:48.188 17:33:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:48.188 17:33:07 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:23:48.188 17:33:07 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:23:48.188 17:33:07 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:23:48.188 17:33:07 -- host/dma.sh@18 -- # subsystem=0 00:23:48.188 17:33:07 -- host/dma.sh@93 -- # nvmftestinit 00:23:48.188 17:33:07 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:48.188 17:33:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.188 17:33:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:48.188 17:33:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:48.188 17:33:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:48.188 17:33:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.188 17:33:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.188 17:33:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.188 17:33:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:48.188 17:33:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:48.188 17:33:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:48.188 17:33:07 -- common/autotest_common.sh@10 -- # set +x 00:23:54.758 17:33:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:54.758 17:33:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:54.758 17:33:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:54.758 17:33:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:54.758 17:33:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:54.758 17:33:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:54.758 17:33:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:54.758 17:33:14 -- nvmf/common.sh@294 -- # net_devs=() 00:23:54.758 17:33:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:54.758 17:33:14 -- nvmf/common.sh@295 -- # e810=() 00:23:54.758 17:33:14 -- nvmf/common.sh@295 -- # local -ga e810 00:23:54.758 17:33:14 -- nvmf/common.sh@296 -- # x722=() 00:23:54.758 17:33:14 -- nvmf/common.sh@296 -- # local -ga x722 00:23:54.758 17:33:14 -- nvmf/common.sh@297 -- # mlx=() 00:23:54.758 17:33:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:54.758 17:33:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.758 17:33:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.758 17:33:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.758 17:33:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.758 17:33:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.758 17:33:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.758 17:33:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.758 17:33:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.758 17:33:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.758 17:33:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.758 17:33:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.758 17:33:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:54.758 17:33:14 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:54.758 17:33:14 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:23:54.759 17:33:14 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:54.759 17:33:14 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:54.759 17:33:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:54.759 17:33:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:54.759 17:33:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:54.759 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:54.759 17:33:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:54.759 17:33:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:54.759 17:33:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:54.759 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:54.759 17:33:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:54.759 17:33:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:54.759 17:33:14 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:54.759 17:33:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.759 17:33:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:54.759 17:33:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.759 17:33:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:54.759 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:54.759 17:33:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.759 17:33:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:54.759 17:33:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.759 17:33:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:54.759 17:33:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.759 17:33:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:54.759 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:54.759 17:33:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.759 17:33:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:54.759 17:33:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:54.759 17:33:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:54.759 17:33:14 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:54.759 17:33:14 -- nvmf/common.sh@57 -- # uname 00:23:54.759 17:33:14 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:54.759 17:33:14 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:23:54.759 17:33:14 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:54.759 17:33:14 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:54.759 17:33:14 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:54.759 17:33:14 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:54.759 17:33:14 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:54.759 17:33:14 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:54.759 17:33:14 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:54.759 17:33:14 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:54.759 17:33:14 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:54.759 17:33:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:54.759 17:33:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:54.759 17:33:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:54.759 17:33:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:54.759 17:33:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:54.759 17:33:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:54.759 17:33:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.759 17:33:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:54.759 17:33:14 -- nvmf/common.sh@104 -- # continue 2 00:23:54.759 17:33:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:54.759 17:33:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.759 17:33:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.759 17:33:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:54.759 17:33:14 -- nvmf/common.sh@104 -- # continue 2 00:23:54.759 17:33:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:54.759 17:33:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:54.759 17:33:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:54.759 17:33:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:54.759 17:33:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:54.759 17:33:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:54.759 17:33:14 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:54.759 17:33:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:54.759 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:54.759 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:54.759 altname enp217s0f0np0 00:23:54.759 altname ens818f0np0 00:23:54.759 inet 192.168.100.8/24 scope global mlx_0_0 00:23:54.759 valid_lft forever preferred_lft forever 00:23:54.759 17:33:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:54.759 17:33:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:54.759 17:33:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:54.759 17:33:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:54.759 17:33:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:54.759 17:33:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:54.759 17:33:14 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:54.759 17:33:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:54.759 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:54.759 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:54.759 altname enp217s0f1np1 00:23:54.759 altname ens818f1np1 00:23:54.759 inet 192.168.100.9/24 scope global mlx_0_1 00:23:54.759 valid_lft forever preferred_lft forever 00:23:54.759 17:33:14 -- nvmf/common.sh@410 -- # return 0 00:23:54.759 17:33:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:54.759 17:33:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:54.759 17:33:14 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:54.759 17:33:14 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:54.759 17:33:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:54.759 17:33:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:54.759 17:33:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:54.759 17:33:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:54.759 17:33:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:54.759 17:33:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:54.759 17:33:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.759 17:33:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:54.759 17:33:14 -- nvmf/common.sh@104 -- # continue 2 00:23:54.759 17:33:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:54.759 17:33:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.759 17:33:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.759 17:33:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:54.759 17:33:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:54.759 17:33:14 -- nvmf/common.sh@104 -- # continue 2 00:23:54.759 17:33:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:54.759 17:33:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:54.759 17:33:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:54.759 17:33:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:54.759 17:33:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:54.759 17:33:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:54.759 17:33:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:54.759 17:33:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:54.759 17:33:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:54.759 17:33:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:54.759 17:33:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:54.759 17:33:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:54.760 17:33:14 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:54.760 192.168.100.9' 00:23:54.760 17:33:14 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:54.760 192.168.100.9' 00:23:54.760 17:33:14 -- nvmf/common.sh@445 -- # head -n 1 00:23:54.760 17:33:14 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:54.760 17:33:14 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:54.760 192.168.100.9' 00:23:54.760 17:33:14 -- nvmf/common.sh@446 -- # tail -n +2 00:23:54.760 17:33:14 -- nvmf/common.sh@446 -- # head -n 1 00:23:54.760 17:33:14 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:54.760 17:33:14 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:54.760 17:33:14 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:54.760 17:33:14 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:54.760 17:33:14 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:54.760 17:33:14 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:54.760 17:33:14 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:23:54.760 17:33:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:54.760 17:33:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:54.760 17:33:14 -- common/autotest_common.sh@10 -- # set +x 00:23:54.760 17:33:14 -- nvmf/common.sh@469 -- # nvmfpid=2787883 00:23:54.760 17:33:14 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:54.760 17:33:14 -- nvmf/common.sh@470 -- # waitforlisten 2787883 00:23:54.760 17:33:14 -- common/autotest_common.sh@829 -- # '[' -z 2787883 ']' 00:23:54.760 17:33:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.760 17:33:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.760 17:33:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.760 17:33:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.760 17:33:14 -- common/autotest_common.sh@10 -- # set +x 00:23:55.019 [2024-11-09 17:33:14.559509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:55.019 [2024-11-09 17:33:14.559566] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.019 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.019 [2024-11-09 17:33:14.629875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:55.019 [2024-11-09 17:33:14.705056] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:55.019 [2024-11-09 17:33:14.705162] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.019 [2024-11-09 17:33:14.705171] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.019 [2024-11-09 17:33:14.705180] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:55.019 [2024-11-09 17:33:14.705227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.019 [2024-11-09 17:33:14.705229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.958 17:33:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:55.958 17:33:15 -- common/autotest_common.sh@862 -- # return 0 00:23:55.958 17:33:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:55.958 17:33:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:55.958 17:33:15 -- common/autotest_common.sh@10 -- # set +x 00:23:55.958 17:33:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.958 17:33:15 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:55.958 17:33:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.958 17:33:15 -- common/autotest_common.sh@10 -- # set +x 00:23:55.958 [2024-11-09 17:33:15.450245] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dd8a60/0x1ddcf50) succeed. 00:23:55.958 [2024-11-09 17:33:15.459181] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1dd9f60/0x1e1e5f0) succeed. 00:23:55.958 17:33:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.958 17:33:15 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:23:55.958 17:33:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.958 17:33:15 -- common/autotest_common.sh@10 -- # set +x 00:23:55.958 Malloc0 00:23:55.958 17:33:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.959 17:33:15 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:55.959 17:33:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.959 17:33:15 -- common/autotest_common.sh@10 -- # set +x 00:23:55.959 17:33:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.959 17:33:15 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:23:55.959 17:33:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.959 17:33:15 -- common/autotest_common.sh@10 -- # set +x 00:23:55.959 17:33:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.959 17:33:15 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:23:55.959 17:33:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.959 17:33:15 -- common/autotest_common.sh@10 -- # set +x 00:23:55.959 [2024-11-09 17:33:15.616599] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:55.959 17:33:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.959 17:33:15 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock 00:23:55.959 17:33:15 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:23:55.959 17:33:15 -- nvmf/common.sh@520 -- # config=() 00:23:55.959 17:33:15 -- nvmf/common.sh@520 -- # local subsystem config 00:23:55.959 17:33:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:55.959 17:33:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:55.959 { 00:23:55.959 "params": { 00:23:55.959 "name": "Nvme$subsystem", 00:23:55.959 "trtype": "$TEST_TRANSPORT", 00:23:55.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:55.959 "adrfam": 
"ipv4", 00:23:55.959 "trsvcid": "$NVMF_PORT", 00:23:55.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:55.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:55.959 "hdgst": ${hdgst:-false}, 00:23:55.959 "ddgst": ${ddgst:-false} 00:23:55.959 }, 00:23:55.959 "method": "bdev_nvme_attach_controller" 00:23:55.959 } 00:23:55.959 EOF 00:23:55.959 )") 00:23:55.959 17:33:15 -- nvmf/common.sh@542 -- # cat 00:23:55.959 17:33:15 -- nvmf/common.sh@544 -- # jq . 00:23:55.959 17:33:15 -- nvmf/common.sh@545 -- # IFS=, 00:23:55.959 17:33:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:55.959 "params": { 00:23:55.959 "name": "Nvme0", 00:23:55.959 "trtype": "rdma", 00:23:55.959 "traddr": "192.168.100.8", 00:23:55.959 "adrfam": "ipv4", 00:23:55.959 "trsvcid": "4420", 00:23:55.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:55.959 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:55.959 "hdgst": false, 00:23:55.959 "ddgst": false 00:23:55.959 }, 00:23:55.959 "method": "bdev_nvme_attach_controller" 00:23:55.959 }' 00:23:55.959 [2024-11-09 17:33:15.665874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:55.959 [2024-11-09 17:33:15.665919] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788132 ] 00:23:55.959 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.218 [2024-11-09 17:33:15.731484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:56.218 [2024-11-09 17:33:15.799726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:56.218 [2024-11-09 17:33:15.799729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.495 bdev Nvme0n1 reports 1 memory domains 00:24:01.495 bdev Nvme0n1 supports RDMA memory domain 00:24:01.495 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:01.495 ========================================================================== 00:24:01.495 Latency [us] 00:24:01.495 IOPS MiB/s Average min max 00:24:01.495 Core 2: 21942.09 85.71 728.42 231.15 8233.10 00:24:01.495 Core 3: 22115.26 86.39 722.71 237.74 8303.41 00:24:01.495 ========================================================================== 00:24:01.495 Total : 44057.36 172.10 725.55 231.15 8303.41 00:24:01.495 00:24:01.495 Total operations: 220330, translate 220330 pull_push 0 memzero 0 00:24:01.495 17:33:21 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push -r /var/tmp/dma.sock 00:24:01.495 17:33:21 -- host/dma.sh@107 -- # gen_malloc_json 00:24:01.495 17:33:21 -- host/dma.sh@21 -- # jq . 00:24:01.495 [2024-11-09 17:33:21.258579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:01.495 [2024-11-09 17:33:21.258632] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2789125 ] 00:24:01.755 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.755 [2024-11-09 17:33:21.323658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:01.755 [2024-11-09 17:33:21.390692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.755 [2024-11-09 17:33:21.390694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.030 bdev Malloc0 reports 1 memory domains 00:24:07.030 bdev Malloc0 doesn't support RDMA memory domain 00:24:07.030 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:07.030 ========================================================================== 00:24:07.030 Latency [us] 00:24:07.030 IOPS MiB/s Average min max 00:24:07.030 Core 2: 14878.93 58.12 1074.62 404.00 1370.34 00:24:07.030 Core 3: 15175.82 59.28 1053.59 370.20 1801.80 00:24:07.030 ========================================================================== 00:24:07.030 Total : 30054.75 117.40 1064.00 370.20 1801.80 00:24:07.030 00:24:07.030 Total operations: 150329, translate 0 pull_push 601316 memzero 0 00:24:07.030 17:33:26 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero -r /var/tmp/dma.sock 00:24:07.030 17:33:26 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:24:07.030 17:33:26 -- host/dma.sh@48 -- # local subsystem=0 00:24:07.030 17:33:26 -- host/dma.sh@50 -- # jq . 00:24:07.030 Ignoring -M option 00:24:07.030 [2024-11-09 17:33:26.758744] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:07.030 [2024-11-09 17:33:26.758807] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2790018 ] 00:24:07.030 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.290 [2024-11-09 17:33:26.824110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:07.290 [2024-11-09 17:33:26.891104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:07.290 [2024-11-09 17:33:26.891107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.549 [2024-11-09 17:33:27.096645] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:12.831 [2024-11-09 17:33:32.125278] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:12.831 bdev 0f02b303-fa8f-410e-b6af-dd50252d0f17 reports 1 memory domains 00:24:12.831 bdev 0f02b303-fa8f-410e-b6af-dd50252d0f17 supports RDMA memory domain 00:24:12.831 Initialization complete, running randread IO for 5 sec on 2 cores 00:24:12.831 ========================================================================== 00:24:12.831 Latency [us] 00:24:12.831 IOPS MiB/s Average min max 00:24:12.831 Core 2: 74453.38 290.83 214.09 88.73 2997.61 00:24:12.831 Core 3: 71102.32 277.74 224.16 73.34 3038.25 00:24:12.831 ========================================================================== 00:24:12.831 Total : 145555.70 568.58 219.01 73.34 3038.25 00:24:12.831 00:24:12.831 Total operations: 727851, translate 0 pull_push 0 memzero 727851 00:24:12.831 17:33:32 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:24:12.831 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.831 [2024-11-09 17:33:32.451919] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:15.368 Initializing NVMe Controllers 00:24:15.368 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:24:15.368 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:24:15.368 Initialization complete. Launching workers. 00:24:15.368 ======================================================== 00:24:15.368 Latency(us) 00:24:15.368 Device Information : IOPS MiB/s Average min max 00:24:15.368 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7995.92 3990.54 10973.51 00:24:15.368 ======================================================== 00:24:15.368 Total : 2016.00 7.88 7995.92 3990.54 10973.51 00:24:15.368 00:24:15.368 17:33:34 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate -r /var/tmp/dma.sock 00:24:15.368 17:33:34 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:24:15.368 17:33:34 -- host/dma.sh@48 -- # local subsystem=0 00:24:15.368 17:33:34 -- host/dma.sh@50 -- # jq . 
00:24:15.368 [2024-11-09 17:33:34.796803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:15.368 [2024-11-09 17:33:34.796860] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2791368 ] 00:24:15.368 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.368 [2024-11-09 17:33:34.861623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:15.368 [2024-11-09 17:33:34.930277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:15.368 [2024-11-09 17:33:34.930280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.628 [2024-11-09 17:33:35.145399] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:20.907 [2024-11-09 17:33:40.174884] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:20.907 bdev bada2923-49da-4fd9-a1ea-e6da0aca100a reports 1 memory domains 00:24:20.907 bdev bada2923-49da-4fd9-a1ea-e6da0aca100a supports RDMA memory domain 00:24:20.907 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:20.907 ========================================================================== 00:24:20.907 Latency [us] 00:24:20.907 IOPS MiB/s Average min max 00:24:20.907 Core 2: 19306.14 75.41 828.10 17.86 8444.79 00:24:20.907 Core 3: 19677.10 76.86 812.47 11.60 8523.11 00:24:20.907 ========================================================================== 00:24:20.907 Total : 38983.24 152.28 820.21 11.60 8523.11 00:24:20.907 00:24:20.907 Total operations: 194939, translate 194830 pull_push 0 memzero 109 00:24:20.907 17:33:40 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:24:20.907 17:33:40 -- host/dma.sh@120 -- # nvmftestfini 00:24:20.907 17:33:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:20.907 17:33:40 -- nvmf/common.sh@116 -- # sync 00:24:20.907 17:33:40 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:20.907 17:33:40 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:20.907 17:33:40 -- nvmf/common.sh@119 -- # set +e 00:24:20.907 17:33:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:20.907 17:33:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:20.907 rmmod nvme_rdma 00:24:20.907 rmmod nvme_fabrics 00:24:20.907 17:33:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:20.907 17:33:40 -- nvmf/common.sh@123 -- # set -e 00:24:20.907 17:33:40 -- nvmf/common.sh@124 -- # return 0 00:24:20.907 17:33:40 -- nvmf/common.sh@477 -- # '[' -n 2787883 ']' 00:24:20.907 17:33:40 -- nvmf/common.sh@478 -- # killprocess 2787883 00:24:20.907 17:33:40 -- common/autotest_common.sh@936 -- # '[' -z 2787883 ']' 00:24:20.907 17:33:40 -- common/autotest_common.sh@940 -- # kill -0 2787883 00:24:20.907 17:33:40 -- common/autotest_common.sh@941 -- # uname 00:24:20.907 17:33:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:20.907 17:33:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2787883 00:24:20.907 17:33:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:20.907 17:33:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:20.907 17:33:40 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 2787883' 00:24:20.907 killing process with pid 2787883 00:24:20.907 17:33:40 -- common/autotest_common.sh@955 -- # kill 2787883 00:24:20.907 17:33:40 -- common/autotest_common.sh@960 -- # wait 2787883 00:24:21.216 17:33:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:21.216 17:33:40 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:21.216 00:24:21.216 real 0m33.184s 00:24:21.216 user 1m37.115s 00:24:21.216 sys 0m6.264s 00:24:21.216 17:33:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:21.216 17:33:40 -- common/autotest_common.sh@10 -- # set +x 00:24:21.216 ************************************ 00:24:21.216 END TEST dma 00:24:21.216 ************************************ 00:24:21.216 17:33:40 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:24:21.216 17:33:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:21.216 17:33:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:21.216 17:33:40 -- common/autotest_common.sh@10 -- # set +x 00:24:21.216 ************************************ 00:24:21.216 START TEST nvmf_identify 00:24:21.216 ************************************ 00:24:21.216 17:33:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:24:21.515 * Looking for test storage... 00:24:21.515 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:21.515 17:33:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:21.515 17:33:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:21.515 17:33:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:21.515 17:33:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:21.515 17:33:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:21.515 17:33:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:21.515 17:33:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:21.515 17:33:41 -- scripts/common.sh@335 -- # IFS=.-: 00:24:21.515 17:33:41 -- scripts/common.sh@335 -- # read -ra ver1 00:24:21.515 17:33:41 -- scripts/common.sh@336 -- # IFS=.-: 00:24:21.515 17:33:41 -- scripts/common.sh@336 -- # read -ra ver2 00:24:21.515 17:33:41 -- scripts/common.sh@337 -- # local 'op=<' 00:24:21.515 17:33:41 -- scripts/common.sh@339 -- # ver1_l=2 00:24:21.515 17:33:41 -- scripts/common.sh@340 -- # ver2_l=1 00:24:21.515 17:33:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:21.515 17:33:41 -- scripts/common.sh@343 -- # case "$op" in 00:24:21.515 17:33:41 -- scripts/common.sh@344 -- # : 1 00:24:21.515 17:33:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:21.515 17:33:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:21.515 17:33:41 -- scripts/common.sh@364 -- # decimal 1 00:24:21.515 17:33:41 -- scripts/common.sh@352 -- # local d=1 00:24:21.515 17:33:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:21.515 17:33:41 -- scripts/common.sh@354 -- # echo 1 00:24:21.515 17:33:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:21.515 17:33:41 -- scripts/common.sh@365 -- # decimal 2 00:24:21.515 17:33:41 -- scripts/common.sh@352 -- # local d=2 00:24:21.515 17:33:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:21.515 17:33:41 -- scripts/common.sh@354 -- # echo 2 00:24:21.515 17:33:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:21.515 17:33:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:21.515 17:33:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:21.515 17:33:41 -- scripts/common.sh@367 -- # return 0 00:24:21.515 17:33:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:21.515 17:33:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:21.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.515 --rc genhtml_branch_coverage=1 00:24:21.515 --rc genhtml_function_coverage=1 00:24:21.515 --rc genhtml_legend=1 00:24:21.515 --rc geninfo_all_blocks=1 00:24:21.515 --rc geninfo_unexecuted_blocks=1 00:24:21.515 00:24:21.515 ' 00:24:21.515 17:33:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:21.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.515 --rc genhtml_branch_coverage=1 00:24:21.515 --rc genhtml_function_coverage=1 00:24:21.515 --rc genhtml_legend=1 00:24:21.515 --rc geninfo_all_blocks=1 00:24:21.515 --rc geninfo_unexecuted_blocks=1 00:24:21.515 00:24:21.515 ' 00:24:21.515 17:33:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:21.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.515 --rc genhtml_branch_coverage=1 00:24:21.515 --rc genhtml_function_coverage=1 00:24:21.515 --rc genhtml_legend=1 00:24:21.515 --rc geninfo_all_blocks=1 00:24:21.515 --rc geninfo_unexecuted_blocks=1 00:24:21.515 00:24:21.515 ' 00:24:21.515 17:33:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:21.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.515 --rc genhtml_branch_coverage=1 00:24:21.516 --rc genhtml_function_coverage=1 00:24:21.516 --rc genhtml_legend=1 00:24:21.516 --rc geninfo_all_blocks=1 00:24:21.516 --rc geninfo_unexecuted_blocks=1 00:24:21.516 00:24:21.516 ' 00:24:21.516 17:33:41 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.516 17:33:41 -- nvmf/common.sh@7 -- # uname -s 00:24:21.516 17:33:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.516 17:33:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.516 17:33:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.516 17:33:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.516 17:33:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.516 17:33:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.516 17:33:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.516 17:33:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.516 17:33:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.516 17:33:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.516 17:33:41 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:21.516 17:33:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:21.516 17:33:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.516 17:33:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.516 17:33:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.516 17:33:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:21.516 17:33:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.516 17:33:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.516 17:33:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.516 17:33:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.516 17:33:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.516 17:33:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.516 17:33:41 -- paths/export.sh@5 -- # export PATH 00:24:21.516 17:33:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.516 17:33:41 -- nvmf/common.sh@46 -- # : 0 00:24:21.516 17:33:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:21.516 17:33:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:21.516 17:33:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:21.516 17:33:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.516 17:33:41 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.516 17:33:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:21.516 17:33:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:21.516 17:33:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:21.516 17:33:41 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:21.516 17:33:41 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:21.516 17:33:41 -- host/identify.sh@14 -- # nvmftestinit 00:24:21.516 17:33:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:21.516 17:33:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.516 17:33:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:21.516 17:33:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:21.516 17:33:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:21.516 17:33:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.516 17:33:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.516 17:33:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.516 17:33:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:21.516 17:33:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:21.516 17:33:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:21.516 17:33:41 -- common/autotest_common.sh@10 -- # set +x 00:24:28.095 17:33:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:28.095 17:33:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:28.095 17:33:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:28.095 17:33:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:28.095 17:33:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:28.095 17:33:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:28.095 17:33:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:28.095 17:33:47 -- nvmf/common.sh@294 -- # net_devs=() 00:24:28.095 17:33:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:28.095 17:33:47 -- nvmf/common.sh@295 -- # e810=() 00:24:28.095 17:33:47 -- nvmf/common.sh@295 -- # local -ga e810 00:24:28.095 17:33:47 -- nvmf/common.sh@296 -- # x722=() 00:24:28.095 17:33:47 -- nvmf/common.sh@296 -- # local -ga x722 00:24:28.095 17:33:47 -- nvmf/common.sh@297 -- # mlx=() 00:24:28.095 17:33:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:28.095 17:33:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.095 17:33:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.095 17:33:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.095 17:33:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.095 17:33:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.095 17:33:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.095 17:33:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.095 17:33:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.095 17:33:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.095 17:33:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.095 17:33:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.095 17:33:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:28.095 17:33:47 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:28.095 
17:33:47 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:28.095 17:33:47 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:28.095 17:33:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:28.095 17:33:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:28.095 17:33:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:28.095 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:28.095 17:33:47 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:28.095 17:33:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:28.095 17:33:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:28.095 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:28.095 17:33:47 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:28.095 17:33:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:28.095 17:33:47 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:28.095 17:33:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.095 17:33:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:28.095 17:33:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.095 17:33:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:28.095 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:28.095 17:33:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.095 17:33:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:28.095 17:33:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.095 17:33:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:28.095 17:33:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.095 17:33:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:28.095 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:28.095 17:33:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.095 17:33:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:28.095 17:33:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:28.095 17:33:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:28.095 17:33:47 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:28.095 17:33:47 -- nvmf/common.sh@57 -- # uname 00:24:28.095 17:33:47 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:28.095 17:33:47 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:28.095 
17:33:47 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:28.095 17:33:47 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:28.095 17:33:47 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:28.095 17:33:47 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:28.095 17:33:47 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:28.095 17:33:47 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:28.095 17:33:47 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:28.095 17:33:47 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:28.095 17:33:47 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:28.095 17:33:47 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:28.095 17:33:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:28.095 17:33:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:28.095 17:33:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:28.095 17:33:47 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:28.095 17:33:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:28.095 17:33:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.095 17:33:47 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:28.095 17:33:47 -- nvmf/common.sh@104 -- # continue 2 00:24:28.095 17:33:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:28.095 17:33:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.095 17:33:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.095 17:33:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:28.095 17:33:47 -- nvmf/common.sh@104 -- # continue 2 00:24:28.095 17:33:47 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:28.095 17:33:47 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:28.095 17:33:47 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:28.095 17:33:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:28.095 17:33:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:28.095 17:33:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:28.095 17:33:47 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:28.095 17:33:47 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:28.095 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:28.095 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:28.095 altname enp217s0f0np0 00:24:28.095 altname ens818f0np0 00:24:28.095 inet 192.168.100.8/24 scope global mlx_0_0 00:24:28.095 valid_lft forever preferred_lft forever 00:24:28.095 17:33:47 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:28.095 17:33:47 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:28.095 17:33:47 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:28.095 17:33:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:28.095 17:33:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:28.095 17:33:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:28.095 17:33:47 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:28.095 17:33:47 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:28.095 7: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:24:28.095 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:28.095 altname enp217s0f1np1 00:24:28.095 altname ens818f1np1 00:24:28.095 inet 192.168.100.9/24 scope global mlx_0_1 00:24:28.095 valid_lft forever preferred_lft forever 00:24:28.095 17:33:47 -- nvmf/common.sh@410 -- # return 0 00:24:28.095 17:33:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:28.095 17:33:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:28.095 17:33:47 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:28.095 17:33:47 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:28.096 17:33:47 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:28.096 17:33:47 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:28.096 17:33:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:28.096 17:33:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:28.096 17:33:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:28.096 17:33:47 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:28.096 17:33:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:28.096 17:33:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.096 17:33:47 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:28.096 17:33:47 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:28.096 17:33:47 -- nvmf/common.sh@104 -- # continue 2 00:24:28.096 17:33:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:28.096 17:33:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.354 17:33:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:28.354 17:33:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:28.354 17:33:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:28.354 17:33:47 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:28.354 17:33:47 -- nvmf/common.sh@104 -- # continue 2 00:24:28.354 17:33:47 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:28.354 17:33:47 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:28.354 17:33:47 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:28.354 17:33:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:28.354 17:33:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:28.354 17:33:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:28.354 17:33:47 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:28.354 17:33:47 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:28.354 17:33:47 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:28.354 17:33:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:28.354 17:33:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:28.354 17:33:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:28.354 17:33:47 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:28.354 192.168.100.9' 00:24:28.354 17:33:47 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:28.354 192.168.100.9' 00:24:28.354 17:33:47 -- nvmf/common.sh@445 -- # head -n 1 00:24:28.354 17:33:47 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:28.354 17:33:47 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:28.354 192.168.100.9' 00:24:28.354 17:33:47 -- nvmf/common.sh@446 -- # tail -n +2 00:24:28.354 17:33:47 -- nvmf/common.sh@446 -- # head -n 1 00:24:28.354 17:33:47 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:28.354 17:33:47 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:24:28.354 17:33:47 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:28.354 17:33:47 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:28.354 17:33:47 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:28.354 17:33:47 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:28.355 17:33:47 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:28.355 17:33:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:28.355 17:33:47 -- common/autotest_common.sh@10 -- # set +x 00:24:28.355 17:33:47 -- host/identify.sh@19 -- # nvmfpid=2795633 00:24:28.355 17:33:47 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:28.355 17:33:47 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:28.355 17:33:47 -- host/identify.sh@23 -- # waitforlisten 2795633 00:24:28.355 17:33:47 -- common/autotest_common.sh@829 -- # '[' -z 2795633 ']' 00:24:28.355 17:33:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.355 17:33:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:28.355 17:33:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.355 17:33:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.355 17:33:47 -- common/autotest_common.sh@10 -- # set +x 00:24:28.355 [2024-11-09 17:33:47.985822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:28.355 [2024-11-09 17:33:47.985871] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.355 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.355 [2024-11-09 17:33:48.056744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.613 [2024-11-09 17:33:48.131968] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:28.613 [2024-11-09 17:33:48.132075] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.613 [2024-11-09 17:33:48.132085] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.613 [2024-11-09 17:33:48.132094] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
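At this point the trace has finished the environment prep for the identify test: nvmf/common.sh enumerated the two mlx5 ports, pulled their IPv4 addresses (192.168.100.8 and 192.168.100.9) with the ip/awk/cut pipeline, loaded the RDMA kernel modules, and then launched nvmf_tgt and waited for its RPC socket. A condensed stand-alone sketch of that sequence is below; it is illustrative only, the socket-polling loop is a simplified stand-in for the harness's waitforlisten helper, and SPDK_ROOT is assumed to point at the checked-out spdk tree used in this job.

```bash
#!/usr/bin/env bash
# Condensed sketch of the setup steps traced above (not the harness itself).
set -e
SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # assumed checkout location, as in the log

# Same address pipeline as get_ip_address() in nvmf/common.sh: first IPv4 addr, prefix stripped.
target_ip=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.8 in this run

# Kernel side of NVMe-oF over RDMA, matching the modprobe calls in the trace.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
    sudo modprobe "$mod"
done

# Start the target with the shm id / event mask / core mask seen in the log,
# then wait for the default RPC socket (simplified stand-in for waitforlisten).
"$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
echo "nvmf_tgt (pid $nvmfpid) is up on /var/tmp/spdk.sock, target IP $target_ip"
```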
00:24:28.613 [2024-11-09 17:33:48.132135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.613 [2024-11-09 17:33:48.132228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.613 [2024-11-09 17:33:48.132311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.613 [2024-11-09 17:33:48.132313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.180 17:33:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:29.180 17:33:48 -- common/autotest_common.sh@862 -- # return 0 00:24:29.180 17:33:48 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:29.180 17:33:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.180 17:33:48 -- common/autotest_common.sh@10 -- # set +x 00:24:29.180 [2024-11-09 17:33:48.838323] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11e4090/0x11e8580) succeed. 00:24:29.180 [2024-11-09 17:33:48.847495] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11e5680/0x1229c20) succeed. 00:24:29.444 17:33:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.444 17:33:48 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:29.444 17:33:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:29.444 17:33:48 -- common/autotest_common.sh@10 -- # set +x 00:24:29.444 17:33:49 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:29.444 17:33:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.444 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:24:29.444 Malloc0 00:24:29.444 17:33:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.444 17:33:49 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.444 17:33:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.444 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:24:29.444 17:33:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.444 17:33:49 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:29.444 17:33:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.444 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:24:29.444 17:33:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.444 17:33:49 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:29.444 17:33:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.444 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:24:29.444 [2024-11-09 17:33:49.057737] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:29.444 17:33:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.444 17:33:49 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:29.444 17:33:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.444 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:24:29.444 17:33:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.444 17:33:49 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:29.444 17:33:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.444 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:24:29.444 [2024-11-09 
17:33:49.073372] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:29.444 [ 00:24:29.444 { 00:24:29.444 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:29.444 "subtype": "Discovery", 00:24:29.444 "listen_addresses": [ 00:24:29.444 { 00:24:29.444 "transport": "RDMA", 00:24:29.444 "trtype": "RDMA", 00:24:29.444 "adrfam": "IPv4", 00:24:29.444 "traddr": "192.168.100.8", 00:24:29.444 "trsvcid": "4420" 00:24:29.444 } 00:24:29.444 ], 00:24:29.444 "allow_any_host": true, 00:24:29.444 "hosts": [] 00:24:29.444 }, 00:24:29.444 { 00:24:29.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.444 "subtype": "NVMe", 00:24:29.444 "listen_addresses": [ 00:24:29.444 { 00:24:29.444 "transport": "RDMA", 00:24:29.444 "trtype": "RDMA", 00:24:29.444 "adrfam": "IPv4", 00:24:29.444 "traddr": "192.168.100.8", 00:24:29.444 "trsvcid": "4420" 00:24:29.444 } 00:24:29.444 ], 00:24:29.444 "allow_any_host": true, 00:24:29.444 "hosts": [], 00:24:29.444 "serial_number": "SPDK00000000000001", 00:24:29.444 "model_number": "SPDK bdev Controller", 00:24:29.444 "max_namespaces": 32, 00:24:29.444 "min_cntlid": 1, 00:24:29.444 "max_cntlid": 65519, 00:24:29.444 "namespaces": [ 00:24:29.444 { 00:24:29.444 "nsid": 1, 00:24:29.444 "bdev_name": "Malloc0", 00:24:29.444 "name": "Malloc0", 00:24:29.444 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:29.444 "eui64": "ABCDEF0123456789", 00:24:29.444 "uuid": "4cf13ffa-8343-4af8-aeb1-650f546f557a" 00:24:29.444 } 00:24:29.444 ] 00:24:29.444 } 00:24:29.444 ] 00:24:29.444 17:33:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.444 17:33:49 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:29.444 [2024-11-09 17:33:49.117325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:29.444 [2024-11-09 17:33:49.117364] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2795920 ] 00:24:29.444 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.444 [2024-11-09 17:33:49.164735] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:29.444 [2024-11-09 17:33:49.164809] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:24:29.444 [2024-11-09 17:33:49.164827] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:24:29.444 [2024-11-09 17:33:49.164832] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:24:29.444 [2024-11-09 17:33:49.164862] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:29.444 [2024-11-09 17:33:49.175971] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
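With the target up, identify.sh issues the RPC sequence captured above (create the RDMA transport, a 64 MiB Malloc bdev, the cnode1 subsystem with its namespace, and the RDMA listeners) and then points spdk_nvme_identify at the discovery subsystem, which produces the controller dump that follows. A hand-run equivalent is sketched below; the arguments are copied from the trace, while routing them through SPDK's standalone scripts/rpc.py client instead of the harness's rpc_cmd wrapper is an assumption about how one would reproduce the steps manually.

```bash
#!/usr/bin/env bash
# Hand-run equivalent of the rpc_cmd sequence traced above (arguments copied from the log).
SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc() { "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0                 # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
rpc nvmf_get_subsystems                                   # sanity check: discovery + cnode1

# Identify the discovery controller over RDMA, exactly as host/identify.sh does next:
"$SPDK_ROOT/build/bin/spdk_nvme_identify" \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all
```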
00:24:29.444 [2024-11-09 17:33:49.190042] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:29.444 [2024-11-09 17:33:49.190053] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:24:29.444 [2024-11-09 17:33:49.190061] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190068] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190074] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190081] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190087] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190093] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190099] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190105] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190111] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190117] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190123] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190129] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190135] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190141] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190147] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190153] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190159] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190166] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190172] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190178] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190184] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190190] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190196] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 
17:33:49.190202] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190208] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190214] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190220] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190226] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190232] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190240] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190247] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:29.444 [2024-11-09 17:33:49.190252] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:24:29.445 [2024-11-09 17:33:49.190258] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:29.445 [2024-11-09 17:33:49.190262] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:24:29.445 [2024-11-09 17:33:49.190280] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.190293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183d00 00:24:29.445 [2024-11-09 17:33:49.195459] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.445 [2024-11-09 17:33:49.195469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:29.445 [2024-11-09 17:33:49.195477] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.195485] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:29.445 [2024-11-09 17:33:49.195492] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:29.445 [2024-11-09 17:33:49.195499] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:29.445 [2024-11-09 17:33:49.195515] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.195523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.445 [2024-11-09 17:33:49.195544] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.445 [2024-11-09 17:33:49.195550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:24:29.445 [2024-11-09 17:33:49.195557] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:29.445 [2024-11-09 17:33:49.195563] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.195570] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:29.445 [2024-11-09 17:33:49.195578] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.195585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.445 [2024-11-09 17:33:49.195608] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.445 [2024-11-09 17:33:49.195614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:24:29.445 [2024-11-09 17:33:49.195621] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:29.445 [2024-11-09 17:33:49.195627] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.195634] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:29.445 [2024-11-09 17:33:49.195641] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.195649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.445 [2024-11-09 17:33:49.195672] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.445 [2024-11-09 17:33:49.195678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:29.445 [2024-11-09 17:33:49.195684] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:29.445 [2024-11-09 17:33:49.195690] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.195699] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.195706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.445 [2024-11-09 17:33:49.195724] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.445 [2024-11-09 17:33:49.195729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:29.445 [2024-11-09 17:33:49.195735] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:29.445 [2024-11-09 17:33:49.195741] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:29.445 [2024-11-09 17:33:49.195747] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.195754] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:29.445 [2024-11-09 17:33:49.195861] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:29.445 [2024-11-09 17:33:49.195867] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:29.445 [2024-11-09 17:33:49.195877] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.195885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.445 [2024-11-09 17:33:49.195901] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.445 [2024-11-09 17:33:49.195906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:29.445 [2024-11-09 17:33:49.195913] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:29.445 [2024-11-09 17:33:49.195918] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.195927] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.195934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.445 [2024-11-09 17:33:49.195955] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.445 [2024-11-09 17:33:49.195961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:29.445 [2024-11-09 17:33:49.195967] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:29.445 [2024-11-09 17:33:49.195973] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:29.445 [2024-11-09 17:33:49.195979] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.195986] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:29.445 [2024-11-09 17:33:49.195996] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:29.445 [2024-11-09 17:33:49.196006] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.196013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:29.445 [2024-11-09 17:33:49.196050] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.445 [2024-11-09 17:33:49.196055] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:29.445 [2024-11-09 17:33:49.196064] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:29.445 [2024-11-09 17:33:49.196070] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:29.445 [2024-11-09 17:33:49.196076] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:29.445 [2024-11-09 17:33:49.196082] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:29.445 [2024-11-09 17:33:49.196088] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:29.445 [2024-11-09 17:33:49.196094] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:29.445 [2024-11-09 17:33:49.196100] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.196110] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:29.445 [2024-11-09 17:33:49.196118] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.196126] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.445 [2024-11-09 17:33:49.196153] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.445 [2024-11-09 17:33:49.196158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:29.445 [2024-11-09 17:33:49.196167] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.196174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.445 [2024-11-09 17:33:49.196181] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.196188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.445 [2024-11-09 17:33:49.196195] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.196202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.445 [2024-11-09 17:33:49.196209] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.196216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.445 [2024-11-09 17:33:49.196221] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:24:29.445 [2024-11-09 17:33:49.196229] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.196240] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:29.445 [2024-11-09 17:33:49.196247] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.445 [2024-11-09 17:33:49.196255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.445 [2024-11-09 17:33:49.196270] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.446 [2024-11-09 17:33:49.196276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:24:29.446 [2024-11-09 17:33:49.196283] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:29.446 [2024-11-09 17:33:49.196289] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:29.446 [2024-11-09 17:33:49.196295] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:29.446 [2024-11-09 17:33:49.196304] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.446 [2024-11-09 17:33:49.196311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:29.446 [2024-11-09 17:33:49.196337] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.446 [2024-11-09 17:33:49.196343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:29.446 [2024-11-09 17:33:49.196350] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:29.446 [2024-11-09 17:33:49.196360] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:29.446 [2024-11-09 17:33:49.196382] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.446 [2024-11-09 17:33:49.196390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183d00 00:24:29.446 [2024-11-09 17:33:49.196398] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:29.446 [2024-11-09 17:33:49.196405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.446 [2024-11-09 17:33:49.196422] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.446 [2024-11-09 17:33:49.196428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:29.446 [2024-11-09 17:33:49.196440] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b80 length 0x40 lkey 0x183d00 00:24:29.446 [2024-11-09 17:33:49.196447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183d00 00:24:29.446 [2024-11-09 17:33:49.196645] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:29.446 [2024-11-09 17:33:49.196653] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.446 [2024-11-09 17:33:49.196659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:29.446 [2024-11-09 17:33:49.196665] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:29.446 [2024-11-09 17:33:49.196671] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.446 [2024-11-09 17:33:49.196678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:29.446 [2024-11-09 17:33:49.196689] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:29.446 [2024-11-09 17:33:49.196696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183d00 00:24:29.446 [2024-11-09 17:33:49.196702] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:29.446 [2024-11-09 17:33:49.196722] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.446 [2024-11-09 17:33:49.196727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:29.446 [2024-11-09 17:33:49.196739] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:29.446 ===================================================== 00:24:29.446 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:29.446 ===================================================== 00:24:29.446 Controller Capabilities/Features 00:24:29.446 ================================ 00:24:29.446 Vendor ID: 0000 00:24:29.446 Subsystem Vendor ID: 0000 00:24:29.446 Serial Number: .................... 00:24:29.446 Model Number: ........................................ 
00:24:29.446 Firmware Version: 24.01.1 00:24:29.446 Recommended Arb Burst: 0 00:24:29.446 IEEE OUI Identifier: 00 00 00 00:24:29.446 Multi-path I/O 00:24:29.446 May have multiple subsystem ports: No 00:24:29.446 May have multiple controllers: No 00:24:29.446 Associated with SR-IOV VF: No 00:24:29.446 Max Data Transfer Size: 131072 00:24:29.446 Max Number of Namespaces: 0 00:24:29.446 Max Number of I/O Queues: 1024 00:24:29.446 NVMe Specification Version (VS): 1.3 00:24:29.446 NVMe Specification Version (Identify): 1.3 00:24:29.446 Maximum Queue Entries: 128 00:24:29.446 Contiguous Queues Required: Yes 00:24:29.446 Arbitration Mechanisms Supported 00:24:29.446 Weighted Round Robin: Not Supported 00:24:29.446 Vendor Specific: Not Supported 00:24:29.446 Reset Timeout: 15000 ms 00:24:29.446 Doorbell Stride: 4 bytes 00:24:29.446 NVM Subsystem Reset: Not Supported 00:24:29.446 Command Sets Supported 00:24:29.446 NVM Command Set: Supported 00:24:29.446 Boot Partition: Not Supported 00:24:29.446 Memory Page Size Minimum: 4096 bytes 00:24:29.446 Memory Page Size Maximum: 4096 bytes 00:24:29.446 Persistent Memory Region: Not Supported 00:24:29.446 Optional Asynchronous Events Supported 00:24:29.446 Namespace Attribute Notices: Not Supported 00:24:29.446 Firmware Activation Notices: Not Supported 00:24:29.446 ANA Change Notices: Not Supported 00:24:29.446 PLE Aggregate Log Change Notices: Not Supported 00:24:29.446 LBA Status Info Alert Notices: Not Supported 00:24:29.446 EGE Aggregate Log Change Notices: Not Supported 00:24:29.446 Normal NVM Subsystem Shutdown event: Not Supported 00:24:29.446 Zone Descriptor Change Notices: Not Supported 00:24:29.446 Discovery Log Change Notices: Supported 00:24:29.446 Controller Attributes 00:24:29.446 128-bit Host Identifier: Not Supported 00:24:29.446 Non-Operational Permissive Mode: Not Supported 00:24:29.446 NVM Sets: Not Supported 00:24:29.446 Read Recovery Levels: Not Supported 00:24:29.446 Endurance Groups: Not Supported 00:24:29.446 Predictable Latency Mode: Not Supported 00:24:29.446 Traffic Based Keep ALive: Not Supported 00:24:29.446 Namespace Granularity: Not Supported 00:24:29.446 SQ Associations: Not Supported 00:24:29.446 UUID List: Not Supported 00:24:29.446 Multi-Domain Subsystem: Not Supported 00:24:29.446 Fixed Capacity Management: Not Supported 00:24:29.446 Variable Capacity Management: Not Supported 00:24:29.446 Delete Endurance Group: Not Supported 00:24:29.446 Delete NVM Set: Not Supported 00:24:29.446 Extended LBA Formats Supported: Not Supported 00:24:29.446 Flexible Data Placement Supported: Not Supported 00:24:29.446 00:24:29.446 Controller Memory Buffer Support 00:24:29.446 ================================ 00:24:29.446 Supported: No 00:24:29.446 00:24:29.446 Persistent Memory Region Support 00:24:29.446 ================================ 00:24:29.446 Supported: No 00:24:29.446 00:24:29.446 Admin Command Set Attributes 00:24:29.446 ============================ 00:24:29.446 Security Send/Receive: Not Supported 00:24:29.446 Format NVM: Not Supported 00:24:29.446 Firmware Activate/Download: Not Supported 00:24:29.446 Namespace Management: Not Supported 00:24:29.446 Device Self-Test: Not Supported 00:24:29.446 Directives: Not Supported 00:24:29.446 NVMe-MI: Not Supported 00:24:29.446 Virtualization Management: Not Supported 00:24:29.446 Doorbell Buffer Config: Not Supported 00:24:29.446 Get LBA Status Capability: Not Supported 00:24:29.446 Command & Feature Lockdown Capability: Not Supported 00:24:29.446 Abort Command Limit: 1 00:24:29.446 
Async Event Request Limit: 4 00:24:29.446 Number of Firmware Slots: N/A 00:24:29.446 Firmware Slot 1 Read-Only: N/A 00:24:29.446 Firmware Activation Without Reset: N/A 00:24:29.446 Multiple Update Detection Support: N/A 00:24:29.446 Firmware Update Granularity: No Information Provided 00:24:29.446 Per-Namespace SMART Log: No 00:24:29.446 Asymmetric Namespace Access Log Page: Not Supported 00:24:29.446 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:29.446 Command Effects Log Page: Not Supported 00:24:29.446 Get Log Page Extended Data: Supported 00:24:29.446 Telemetry Log Pages: Not Supported 00:24:29.446 Persistent Event Log Pages: Not Supported 00:24:29.446 Supported Log Pages Log Page: May Support 00:24:29.446 Commands Supported & Effects Log Page: Not Supported 00:24:29.446 Feature Identifiers & Effects Log Page:May Support 00:24:29.446 NVMe-MI Commands & Effects Log Page: May Support 00:24:29.446 Data Area 4 for Telemetry Log: Not Supported 00:24:29.446 Error Log Page Entries Supported: 128 00:24:29.446 Keep Alive: Not Supported 00:24:29.446 00:24:29.446 NVM Command Set Attributes 00:24:29.446 ========================== 00:24:29.446 Submission Queue Entry Size 00:24:29.446 Max: 1 00:24:29.446 Min: 1 00:24:29.446 Completion Queue Entry Size 00:24:29.446 Max: 1 00:24:29.446 Min: 1 00:24:29.446 Number of Namespaces: 0 00:24:29.446 Compare Command: Not Supported 00:24:29.446 Write Uncorrectable Command: Not Supported 00:24:29.446 Dataset Management Command: Not Supported 00:24:29.446 Write Zeroes Command: Not Supported 00:24:29.446 Set Features Save Field: Not Supported 00:24:29.446 Reservations: Not Supported 00:24:29.446 Timestamp: Not Supported 00:24:29.446 Copy: Not Supported 00:24:29.446 Volatile Write Cache: Not Present 00:24:29.446 Atomic Write Unit (Normal): 1 00:24:29.447 Atomic Write Unit (PFail): 1 00:24:29.447 Atomic Compare & Write Unit: 1 00:24:29.447 Fused Compare & Write: Supported 00:24:29.447 Scatter-Gather List 00:24:29.447 SGL Command Set: Supported 00:24:29.447 SGL Keyed: Supported 00:24:29.447 SGL Bit Bucket Descriptor: Not Supported 00:24:29.447 SGL Metadata Pointer: Not Supported 00:24:29.447 Oversized SGL: Not Supported 00:24:29.447 SGL Metadata Address: Not Supported 00:24:29.447 SGL Offset: Supported 00:24:29.447 Transport SGL Data Block: Not Supported 00:24:29.447 Replay Protected Memory Block: Not Supported 00:24:29.447 00:24:29.447 Firmware Slot Information 00:24:29.447 ========================= 00:24:29.447 Active slot: 0 00:24:29.447 00:24:29.447 00:24:29.447 Error Log 00:24:29.447 ========= 00:24:29.447 00:24:29.447 Active Namespaces 00:24:29.447 ================= 00:24:29.447 Discovery Log Page 00:24:29.447 ================== 00:24:29.447 Generation Counter: 2 00:24:29.447 Number of Records: 2 00:24:29.447 Record Format: 0 00:24:29.447 00:24:29.447 Discovery Log Entry 0 00:24:29.447 ---------------------- 00:24:29.447 Transport Type: 1 (RDMA) 00:24:29.447 Address Family: 1 (IPv4) 00:24:29.447 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:29.447 Entry Flags: 00:24:29.447 Duplicate Returned Information: 1 00:24:29.447 Explicit Persistent Connection Support for Discovery: 1 00:24:29.447 Transport Requirements: 00:24:29.447 Secure Channel: Not Required 00:24:29.447 Port ID: 0 (0x0000) 00:24:29.447 Controller ID: 65535 (0xffff) 00:24:29.447 Admin Max SQ Size: 128 00:24:29.447 Transport Service Identifier: 4420 00:24:29.447 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:29.447 Transport Address: 192.168.100.8 
00:24:29.447 Transport Specific Address Subtype - RDMA 00:24:29.447 RDMA QP Service Type: 1 (Reliable Connected) 00:24:29.447 RDMA Provider Type: 1 (No provider specified) 00:24:29.447 RDMA CM Service: 1 (RDMA_CM) 00:24:29.447 Discovery Log Entry 1 00:24:29.447 ---------------------- 00:24:29.447 Transport Type: 1 (RDMA) 00:24:29.447 Address Family: 1 (IPv4) 00:24:29.447 Subsystem Type: 2 (NVM Subsystem) 00:24:29.447 Entry Flags: 00:24:29.447 Duplicate Returned Information: 0 00:24:29.447 Explicit Persistent Connection Support for Discovery: 0 00:24:29.447 Transport Requirements: 00:24:29.447 Secure Channel: Not Required 00:24:29.447 Port ID: 0 (0x0000) 00:24:29.447 Controller ID: 65535 (0xffff) 00:24:29.447 Admin Max SQ Size: [2024-11-09 17:33:49.196815] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:29.447 [2024-11-09 17:33:49.196827] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 679 doesn't match qid 00:24:29.447 [2024-11-09 17:33:49.196840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32665 cdw0:5 sqhd:1e28 p:0 m:0 dnr:0 00:24:29.447 [2024-11-09 17:33:49.196847] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 679 doesn't match qid 00:24:29.447 [2024-11-09 17:33:49.196855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32665 cdw0:5 sqhd:1e28 p:0 m:0 dnr:0 00:24:29.447 [2024-11-09 17:33:49.196861] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 679 doesn't match qid 00:24:29.447 [2024-11-09 17:33:49.196869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32665 cdw0:5 sqhd:1e28 p:0 m:0 dnr:0 00:24:29.447 [2024-11-09 17:33:49.196875] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 679 doesn't match qid 00:24:29.447 [2024-11-09 17:33:49.196883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32665 cdw0:5 sqhd:1e28 p:0 m:0 dnr:0 00:24:29.447 [2024-11-09 17:33:49.196892] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.196900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.447 [2024-11-09 17:33:49.196922] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.447 [2024-11-09 17:33:49.196928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:24:29.447 [2024-11-09 17:33:49.196937] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.196945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.447 [2024-11-09 17:33:49.196951] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.196972] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.447 [2024-11-09 17:33:49.196977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:29.447 [2024-11-09 17:33:49.196984] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:29.447 [2024-11-09 17:33:49.196990] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:29.447 [2024-11-09 17:33:49.196997] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197005] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.447 [2024-11-09 17:33:49.197031] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.447 [2024-11-09 17:33:49.197036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:24:29.447 [2024-11-09 17:33:49.197043] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197052] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.447 [2024-11-09 17:33:49.197080] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.447 [2024-11-09 17:33:49.197085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:24:29.447 [2024-11-09 17:33:49.197092] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197100] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.447 [2024-11-09 17:33:49.197128] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.447 [2024-11-09 17:33:49.197134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:24:29.447 [2024-11-09 17:33:49.197140] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197149] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.447 [2024-11-09 17:33:49.197176] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.447 [2024-11-09 17:33:49.197182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:24:29.447 [2024-11-09 17:33:49.197189] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197198] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 
0x183d00 00:24:29.447 [2024-11-09 17:33:49.197205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.447 [2024-11-09 17:33:49.197225] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.447 [2024-11-09 17:33:49.197231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:24:29.447 [2024-11-09 17:33:49.197238] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197247] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.447 [2024-11-09 17:33:49.197273] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.447 [2024-11-09 17:33:49.197279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:24:29.447 [2024-11-09 17:33:49.197286] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197295] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.447 [2024-11-09 17:33:49.197326] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.447 [2024-11-09 17:33:49.197332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:24:29.447 [2024-11-09 17:33:49.197338] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197347] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.447 [2024-11-09 17:33:49.197370] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.447 [2024-11-09 17:33:49.197375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:24:29.447 [2024-11-09 17:33:49.197382] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197390] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.447 [2024-11-09 17:33:49.197398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.197412] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.197417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 17:33:49.197423] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197432] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.197459] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.197465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 17:33:49.197471] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197480] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.197505] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.197510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 17:33:49.197516] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197525] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.197553] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.197559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 17:33:49.197566] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197576] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.197606] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.197612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 17:33:49.197619] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197627] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.197658] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.197664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 17:33:49.197671] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197680] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.197704] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.197710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 17:33:49.197717] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197726] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.197750] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.197756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 17:33:49.197762] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197770] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.197794] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.197800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 17:33:49.197806] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197816] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.197846] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.197852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 17:33:49.197860] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197869] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.197893] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.197898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 17:33:49.197905] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197913] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.197939] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.197945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 17:33:49.197952] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197961] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.197969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.197984] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.197989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 17:33:49.197996] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.198005] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.198012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.198026] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.198031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 17:33:49.198038] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.198047] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.198055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.198069] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.198075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:24:29.448 [2024-11-09 
17:33:49.198081] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.198090] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.448 [2024-11-09 17:33:49.198099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.448 [2024-11-09 17:33:49.198115] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.448 [2024-11-09 17:33:49.198121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198128] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198137] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198164] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198177] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198186] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198208] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198221] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198230] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198262] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198273] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198282] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198304] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198316] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198325] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198359] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198372] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198380] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198411] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198425] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198434] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198477] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198491] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198500] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198529] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198541] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198549] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198573] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198584] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198593] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198620] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198632] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198640] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198662] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198673] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198682] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198709] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198722] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198731] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198753] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 
17:33:49.198764] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198773] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198796] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198808] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198817] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198840] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198852] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198860] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198890] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198901] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198910] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198939] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198951] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198960] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.449 [2024-11-09 17:33:49.198967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.449 [2024-11-09 17:33:49.198986] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.449 [2024-11-09 17:33:49.198991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:29.449 [2024-11-09 17:33:49.198998] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199006] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.450 [2024-11-09 17:33:49.199033] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.450 [2024-11-09 17:33:49.199039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:29.450 [2024-11-09 17:33:49.199045] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199054] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.450 [2024-11-09 17:33:49.199075] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.450 [2024-11-09 17:33:49.199080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:29.450 [2024-11-09 17:33:49.199087] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199095] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.450 [2024-11-09 17:33:49.199124] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.450 [2024-11-09 17:33:49.199130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:29.450 [2024-11-09 17:33:49.199136] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199145] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.450 [2024-11-09 17:33:49.199168] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.450 [2024-11-09 17:33:49.199174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:29.450 [2024-11-09 17:33:49.199180] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199188] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.450 [2024-11-09 17:33:49.199212] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.450 [2024-11-09 17:33:49.199217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:29.450 [2024-11-09 17:33:49.199223] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199232] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.450 [2024-11-09 17:33:49.199259] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.450 [2024-11-09 17:33:49.199265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:29.450 [2024-11-09 17:33:49.199271] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199280] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.450 [2024-11-09 17:33:49.199303] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.450 [2024-11-09 17:33:49.199309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:29.450 [2024-11-09 17:33:49.199315] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199323] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.450 [2024-11-09 17:33:49.199355] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.450 [2024-11-09 17:33:49.199360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:29.450 [2024-11-09 17:33:49.199367] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199375] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.450 [2024-11-09 17:33:49.199397] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.450 [2024-11-09 17:33:49.199402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:29.450 [2024-11-09 
17:33:49.199408] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199417] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.199424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.450 [2024-11-09 17:33:49.199438] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.450 [2024-11-09 17:33:49.199444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:29.450 [2024-11-09 17:33:49.199450] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.203465] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.203474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.450 [2024-11-09 17:33:49.203494] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.450 [2024-11-09 17:33:49.203500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0006 p:0 m:0 dnr:0 00:24:29.450 [2024-11-09 17:33:49.203506] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:29.450 [2024-11-09 17:33:49.203513] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:24:29.722 128 00:24:29.722 Transport Service Identifier: 4420 00:24:29.722 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:29.722 Transport Address: 192.168.100.8 00:24:29.722 Transport Specific Address Subtype - RDMA 00:24:29.722 RDMA QP Service Type: 1 (Reliable Connected) 00:24:29.722 RDMA Provider Type: 1 (No provider specified) 00:24:29.722 RDMA CM Service: 1 (RDMA_CM) 00:24:29.722 17:33:49 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:29.722 [2024-11-09 17:33:49.270625] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
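The host/identify.sh step above re-runs spdk_nvme_identify against the NVM subsystem from Discovery Log Entry 1 (nqn.2016-06.io.spdk:cnode1) by passing a transport ID string with -r. Reproducing just this step by hand uses the same binary and arguments; a kernel-initiator equivalent with nvme-cli is sketched after it, where the device node name is an assumption that depends on enumeration order:
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420   # kernel-initiator alternative, not used by this job
  nvme id-ctrl /dev/nvme0                                                       # assumed device name after connect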
00:24:29.722 [2024-11-09 17:33:49.270663] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2795922 ] 00:24:29.722 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.722 [2024-11-09 17:33:49.315622] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:29.722 [2024-11-09 17:33:49.315688] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:24:29.722 [2024-11-09 17:33:49.315713] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:24:29.722 [2024-11-09 17:33:49.315718] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:24:29.722 [2024-11-09 17:33:49.315741] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:29.722 [2024-11-09 17:33:49.326914] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:24:29.722 [2024-11-09 17:33:49.341576] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:29.722 [2024-11-09 17:33:49.341587] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:24:29.722 [2024-11-09 17:33:49.341594] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341601] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341607] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341613] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341619] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341625] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341631] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341637] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341643] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341649] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341655] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341662] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341668] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341674] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341682] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local 
addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341689] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341695] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341701] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341707] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341713] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341719] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341725] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341731] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341737] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341743] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341749] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341755] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341761] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341767] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341773] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341779] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341785] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:24:29.722 [2024-11-09 17:33:49.341790] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:29.722 [2024-11-09 17:33:49.341795] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:24:29.722 [2024-11-09 17:33:49.341809] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.341821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183d00 00:24:29.722 [2024-11-09 17:33:49.347459] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.722 [2024-11-09 17:33:49.347468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:29.722 [2024-11-09 17:33:49.347476] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.347483] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: 
*DEBUG*: CNTLID 0x0001 00:24:29.722 [2024-11-09 17:33:49.347489] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:29.722 [2024-11-09 17:33:49.347496] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:29.722 [2024-11-09 17:33:49.347507] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.347515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.722 [2024-11-09 17:33:49.347534] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.722 [2024-11-09 17:33:49.347542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:24:29.722 [2024-11-09 17:33:49.347548] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:29.722 [2024-11-09 17:33:49.347555] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.347561] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:29.722 [2024-11-09 17:33:49.347569] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.347577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.722 [2024-11-09 17:33:49.347601] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.722 [2024-11-09 17:33:49.347606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:24:29.722 [2024-11-09 17:33:49.347613] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:29.722 [2024-11-09 17:33:49.347619] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.347626] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:29.722 [2024-11-09 17:33:49.347633] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.347641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.722 [2024-11-09 17:33:49.347661] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.722 [2024-11-09 17:33:49.347666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:29.722 [2024-11-09 17:33:49.347673] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:29.722 [2024-11-09 17:33:49.347679] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.347687] 
nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.347695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.722 [2024-11-09 17:33:49.347713] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.722 [2024-11-09 17:33:49.347719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:29.722 [2024-11-09 17:33:49.347725] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:29.722 [2024-11-09 17:33:49.347731] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:29.722 [2024-11-09 17:33:49.347737] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.347744] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:29.722 [2024-11-09 17:33:49.347850] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:29.722 [2024-11-09 17:33:49.347855] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:29.722 [2024-11-09 17:33:49.347864] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.722 [2024-11-09 17:33:49.347873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.723 [2024-11-09 17:33:49.347895] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.723 [2024-11-09 17:33:49.347901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:29.723 [2024-11-09 17:33:49.347907] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:29.723 [2024-11-09 17:33:49.347913] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.347921] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.347928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.723 [2024-11-09 17:33:49.347948] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.723 [2024-11-09 17:33:49.347954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:29.723 [2024-11-09 17:33:49.347960] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:29.723 [2024-11-09 17:33:49.347966] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:29.723 [2024-11-09 
17:33:49.347972] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.347979] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:29.723 [2024-11-09 17:33:49.347989] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.347998] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:29.723 [2024-11-09 17:33:49.348041] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.723 [2024-11-09 17:33:49.348047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:29.723 [2024-11-09 17:33:49.348055] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:29.723 [2024-11-09 17:33:49.348061] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:29.723 [2024-11-09 17:33:49.348067] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:29.723 [2024-11-09 17:33:49.348072] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:29.723 [2024-11-09 17:33:49.348078] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:29.723 [2024-11-09 17:33:49.348083] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348089] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348098] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348106] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.723 [2024-11-09 17:33:49.348129] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.723 [2024-11-09 17:33:49.348135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:29.723 [2024-11-09 17:33:49.348143] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.723 [2024-11-09 17:33:49.348157] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348164] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.723 [2024-11-09 17:33:49.348171] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.723 [2024-11-09 17:33:49.348184] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.723 [2024-11-09 17:33:49.348197] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348203] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348213] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348220] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.723 [2024-11-09 17:33:49.348244] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.723 [2024-11-09 17:33:49.348250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:24:29.723 [2024-11-09 17:33:49.348256] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:29.723 [2024-11-09 17:33:49.348262] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348268] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348275] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348284] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348292] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.723 [2024-11-09 17:33:49.348319] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.723 [2024-11-09 17:33:49.348325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:24:29.723 [2024-11-09 17:33:49.348373] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348380] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348388] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348397] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183d00 00:24:29.723 [2024-11-09 17:33:49.348432] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.723 [2024-11-09 17:33:49.348437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:29.723 [2024-11-09 17:33:49.348450] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:29.723 [2024-11-09 17:33:49.348469] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348475] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348483] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348491] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:29.723 [2024-11-09 17:33:49.348536] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.723 [2024-11-09 17:33:49.348541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:29.723 [2024-11-09 17:33:49.348553] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348560] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348568] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348576] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:29.723 [2024-11-09 17:33:49.348603] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.723 [2024-11-09 17:33:49.348609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 
sqhd:000e p:0 m:0 dnr:0 00:24:29.723 [2024-11-09 17:33:49.348618] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348624] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:29.723 [2024-11-09 17:33:49.348631] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348640] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348647] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348655] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:29.723 [2024-11-09 17:33:49.348661] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:29.723 [2024-11-09 17:33:49.348667] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:29.724 [2024-11-09 17:33:49.348673] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:29.724 [2024-11-09 17:33:49.348687] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.724 [2024-11-09 17:33:49.348702] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.724 [2024-11-09 17:33:49.348719] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.724 [2024-11-09 17:33:49.348724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:29.724 [2024-11-09 17:33:49.348731] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348737] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.724 [2024-11-09 17:33:49.348742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:29.724 [2024-11-09 17:33:49.348748] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348757] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.724 [2024-11-09 17:33:49.348784] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv 
completion 00:24:29.724 [2024-11-09 17:33:49.348790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:29.724 [2024-11-09 17:33:49.348796] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348805] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.724 [2024-11-09 17:33:49.348829] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.724 [2024-11-09 17:33:49.348834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:29.724 [2024-11-09 17:33:49.348840] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348849] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.724 [2024-11-09 17:33:49.348871] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.724 [2024-11-09 17:33:49.348876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:24:29.724 [2024-11-09 17:33:49.348883] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348894] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183d00 00:24:29.724 [2024-11-09 17:33:49.348910] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183d00 00:24:29.724 [2024-11-09 17:33:49.348926] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b80 length 0x40 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183d00 00:24:29.724 [2024-11-09 17:33:49.348942] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183d00 00:24:29.724 [2024-11-09 17:33:49.348958] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: 
*DEBUG*: CQ recv completion 00:24:29.724 [2024-11-09 17:33:49.348963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:29.724 [2024-11-09 17:33:49.348977] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.348983] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.724 [2024-11-09 17:33:49.348989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:29.724 [2024-11-09 17:33:49.348997] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.349004] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.724 [2024-11-09 17:33:49.349009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:29.724 [2024-11-09 17:33:49.349016] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:29.724 [2024-11-09 17:33:49.349022] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.724 [2024-11-09 17:33:49.349027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:29.724 [2024-11-09 17:33:49.349038] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:29.724 ===================================================== 00:24:29.724 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:29.724 ===================================================== 00:24:29.724 Controller Capabilities/Features 00:24:29.724 ================================ 00:24:29.724 Vendor ID: 8086 00:24:29.724 Subsystem Vendor ID: 8086 00:24:29.724 Serial Number: SPDK00000000000001 00:24:29.724 Model Number: SPDK bdev Controller 00:24:29.724 Firmware Version: 24.01.1 00:24:29.724 Recommended Arb Burst: 6 00:24:29.724 IEEE OUI Identifier: e4 d2 5c 00:24:29.724 Multi-path I/O 00:24:29.724 May have multiple subsystem ports: Yes 00:24:29.724 May have multiple controllers: Yes 00:24:29.724 Associated with SR-IOV VF: No 00:24:29.724 Max Data Transfer Size: 131072 00:24:29.724 Max Number of Namespaces: 32 00:24:29.724 Max Number of I/O Queues: 127 00:24:29.724 NVMe Specification Version (VS): 1.3 00:24:29.724 NVMe Specification Version (Identify): 1.3 00:24:29.724 Maximum Queue Entries: 128 00:24:29.724 Contiguous Queues Required: Yes 00:24:29.724 Arbitration Mechanisms Supported 00:24:29.724 Weighted Round Robin: Not Supported 00:24:29.724 Vendor Specific: Not Supported 00:24:29.724 Reset Timeout: 15000 ms 00:24:29.724 Doorbell Stride: 4 bytes 00:24:29.724 NVM Subsystem Reset: Not Supported 00:24:29.724 Command Sets Supported 00:24:29.724 NVM Command Set: Supported 00:24:29.724 Boot Partition: Not Supported 00:24:29.724 Memory Page Size Minimum: 4096 bytes 00:24:29.724 Memory Page Size Maximum: 4096 bytes 00:24:29.724 Persistent Memory Region: Not Supported 00:24:29.724 Optional Asynchronous Events Supported 00:24:29.724 Namespace Attribute Notices: Supported 00:24:29.724 Firmware Activation Notices: Not Supported 00:24:29.724 ANA Change Notices: Not Supported 00:24:29.724 PLE Aggregate Log Change Notices: Not Supported 00:24:29.724 LBA Status Info Alert Notices: Not Supported 
00:24:29.724 EGE Aggregate Log Change Notices: Not Supported 00:24:29.724 Normal NVM Subsystem Shutdown event: Not Supported 00:24:29.724 Zone Descriptor Change Notices: Not Supported 00:24:29.724 Discovery Log Change Notices: Not Supported 00:24:29.724 Controller Attributes 00:24:29.724 128-bit Host Identifier: Supported 00:24:29.724 Non-Operational Permissive Mode: Not Supported 00:24:29.724 NVM Sets: Not Supported 00:24:29.724 Read Recovery Levels: Not Supported 00:24:29.724 Endurance Groups: Not Supported 00:24:29.724 Predictable Latency Mode: Not Supported 00:24:29.724 Traffic Based Keep ALive: Not Supported 00:24:29.724 Namespace Granularity: Not Supported 00:24:29.724 SQ Associations: Not Supported 00:24:29.724 UUID List: Not Supported 00:24:29.724 Multi-Domain Subsystem: Not Supported 00:24:29.724 Fixed Capacity Management: Not Supported 00:24:29.724 Variable Capacity Management: Not Supported 00:24:29.724 Delete Endurance Group: Not Supported 00:24:29.724 Delete NVM Set: Not Supported 00:24:29.724 Extended LBA Formats Supported: Not Supported 00:24:29.724 Flexible Data Placement Supported: Not Supported 00:24:29.724 00:24:29.724 Controller Memory Buffer Support 00:24:29.724 ================================ 00:24:29.724 Supported: No 00:24:29.724 00:24:29.724 Persistent Memory Region Support 00:24:29.724 ================================ 00:24:29.724 Supported: No 00:24:29.724 00:24:29.724 Admin Command Set Attributes 00:24:29.724 ============================ 00:24:29.724 Security Send/Receive: Not Supported 00:24:29.724 Format NVM: Not Supported 00:24:29.724 Firmware Activate/Download: Not Supported 00:24:29.724 Namespace Management: Not Supported 00:24:29.724 Device Self-Test: Not Supported 00:24:29.724 Directives: Not Supported 00:24:29.724 NVMe-MI: Not Supported 00:24:29.724 Virtualization Management: Not Supported 00:24:29.724 Doorbell Buffer Config: Not Supported 00:24:29.724 Get LBA Status Capability: Not Supported 00:24:29.725 Command & Feature Lockdown Capability: Not Supported 00:24:29.725 Abort Command Limit: 4 00:24:29.725 Async Event Request Limit: 4 00:24:29.725 Number of Firmware Slots: N/A 00:24:29.725 Firmware Slot 1 Read-Only: N/A 00:24:29.725 Firmware Activation Without Reset: N/A 00:24:29.725 Multiple Update Detection Support: N/A 00:24:29.725 Firmware Update Granularity: No Information Provided 00:24:29.725 Per-Namespace SMART Log: No 00:24:29.725 Asymmetric Namespace Access Log Page: Not Supported 00:24:29.725 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:29.725 Command Effects Log Page: Supported 00:24:29.725 Get Log Page Extended Data: Supported 00:24:29.725 Telemetry Log Pages: Not Supported 00:24:29.725 Persistent Event Log Pages: Not Supported 00:24:29.725 Supported Log Pages Log Page: May Support 00:24:29.725 Commands Supported & Effects Log Page: Not Supported 00:24:29.725 Feature Identifiers & Effects Log Page:May Support 00:24:29.725 NVMe-MI Commands & Effects Log Page: May Support 00:24:29.725 Data Area 4 for Telemetry Log: Not Supported 00:24:29.725 Error Log Page Entries Supported: 128 00:24:29.725 Keep Alive: Supported 00:24:29.725 Keep Alive Granularity: 10000 ms 00:24:29.725 00:24:29.725 NVM Command Set Attributes 00:24:29.725 ========================== 00:24:29.725 Submission Queue Entry Size 00:24:29.725 Max: 64 00:24:29.725 Min: 64 00:24:29.725 Completion Queue Entry Size 00:24:29.725 Max: 16 00:24:29.725 Min: 16 00:24:29.725 Number of Namespaces: 32 00:24:29.725 Compare Command: Supported 00:24:29.725 Write Uncorrectable Command: Not 
Supported 00:24:29.725 Dataset Management Command: Supported 00:24:29.725 Write Zeroes Command: Supported 00:24:29.725 Set Features Save Field: Not Supported 00:24:29.725 Reservations: Supported 00:24:29.725 Timestamp: Not Supported 00:24:29.725 Copy: Supported 00:24:29.725 Volatile Write Cache: Present 00:24:29.725 Atomic Write Unit (Normal): 1 00:24:29.725 Atomic Write Unit (PFail): 1 00:24:29.725 Atomic Compare & Write Unit: 1 00:24:29.725 Fused Compare & Write: Supported 00:24:29.725 Scatter-Gather List 00:24:29.725 SGL Command Set: Supported 00:24:29.725 SGL Keyed: Supported 00:24:29.725 SGL Bit Bucket Descriptor: Not Supported 00:24:29.725 SGL Metadata Pointer: Not Supported 00:24:29.725 Oversized SGL: Not Supported 00:24:29.725 SGL Metadata Address: Not Supported 00:24:29.725 SGL Offset: Supported 00:24:29.725 Transport SGL Data Block: Not Supported 00:24:29.725 Replay Protected Memory Block: Not Supported 00:24:29.725 00:24:29.725 Firmware Slot Information 00:24:29.725 ========================= 00:24:29.725 Active slot: 1 00:24:29.725 Slot 1 Firmware Revision: 24.01.1 00:24:29.725 00:24:29.725 00:24:29.725 Commands Supported and Effects 00:24:29.725 ============================== 00:24:29.725 Admin Commands 00:24:29.725 -------------- 00:24:29.725 Get Log Page (02h): Supported 00:24:29.725 Identify (06h): Supported 00:24:29.725 Abort (08h): Supported 00:24:29.725 Set Features (09h): Supported 00:24:29.725 Get Features (0Ah): Supported 00:24:29.725 Asynchronous Event Request (0Ch): Supported 00:24:29.725 Keep Alive (18h): Supported 00:24:29.725 I/O Commands 00:24:29.725 ------------ 00:24:29.725 Flush (00h): Supported LBA-Change 00:24:29.725 Write (01h): Supported LBA-Change 00:24:29.725 Read (02h): Supported 00:24:29.725 Compare (05h): Supported 00:24:29.725 Write Zeroes (08h): Supported LBA-Change 00:24:29.725 Dataset Management (09h): Supported LBA-Change 00:24:29.725 Copy (19h): Supported LBA-Change 00:24:29.725 Unknown (79h): Supported LBA-Change 00:24:29.725 Unknown (7Ah): Supported 00:24:29.725 00:24:29.725 Error Log 00:24:29.725 ========= 00:24:29.725 00:24:29.725 Arbitration 00:24:29.725 =========== 00:24:29.725 Arbitration Burst: 1 00:24:29.725 00:24:29.725 Power Management 00:24:29.725 ================ 00:24:29.725 Number of Power States: 1 00:24:29.725 Current Power State: Power State #0 00:24:29.725 Power State #0: 00:24:29.725 Max Power: 0.00 W 00:24:29.725 Non-Operational State: Operational 00:24:29.725 Entry Latency: Not Reported 00:24:29.725 Exit Latency: Not Reported 00:24:29.725 Relative Read Throughput: 0 00:24:29.725 Relative Read Latency: 0 00:24:29.725 Relative Write Throughput: 0 00:24:29.725 Relative Write Latency: 0 00:24:29.725 Idle Power: Not Reported 00:24:29.725 Active Power: Not Reported 00:24:29.725 Non-Operational Permissive Mode: Not Supported 00:24:29.725 00:24:29.725 Health Information 00:24:29.725 ================== 00:24:29.725 Critical Warnings: 00:24:29.725 Available Spare Space: OK 00:24:29.725 Temperature: OK 00:24:29.725 Device Reliability: OK 00:24:29.725 Read Only: No 00:24:29.725 Volatile Memory Backup: OK 00:24:29.725 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:29.725 Temperature Threshol[2024-11-09 17:33:49.349119] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183d00 00:24:29.725 [2024-11-09 17:33:49.349127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 
00:24:29.725 [2024-11-09 17:33:49.349144] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.725 [2024-11-09 17:33:49.349150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:29.725 [2024-11-09 17:33:49.349156] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:29.725 [2024-11-09 17:33:49.349181] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:29.725 [2024-11-09 17:33:49.349191] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 36072 doesn't match qid 00:24:29.725 [2024-11-09 17:33:49.349206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32712 cdw0:5 sqhd:8e28 p:0 m:0 dnr:0 00:24:29.725 [2024-11-09 17:33:49.349212] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 36072 doesn't match qid 00:24:29.725 [2024-11-09 17:33:49.349221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32712 cdw0:5 sqhd:8e28 p:0 m:0 dnr:0 00:24:29.725 [2024-11-09 17:33:49.349227] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 36072 doesn't match qid 00:24:29.725 [2024-11-09 17:33:49.349235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32712 cdw0:5 sqhd:8e28 p:0 m:0 dnr:0 00:24:29.725 [2024-11-09 17:33:49.349241] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 36072 doesn't match qid 00:24:29.725 [2024-11-09 17:33:49.349249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32712 cdw0:5 sqhd:8e28 p:0 m:0 dnr:0 00:24:29.725 [2024-11-09 17:33:49.349258] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:29.725 [2024-11-09 17:33:49.349265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.725 [2024-11-09 17:33:49.349288] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.725 [2024-11-09 17:33:49.349294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:24:29.726 [2024-11-09 17:33:49.349302] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.726 [2024-11-09 17:33:49.349315] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349328] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.726 [2024-11-09 17:33:49.349334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:29.726 [2024-11-09 17:33:49.349340] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:29.726 [2024-11-09 17:33:49.349346] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:29.726 [2024-11-09 17:33:49.349352] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: 
local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349361] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.726 [2024-11-09 17:33:49.349388] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.726 [2024-11-09 17:33:49.349394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:29.726 [2024-11-09 17:33:49.349400] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349409] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.726 [2024-11-09 17:33:49.349436] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.726 [2024-11-09 17:33:49.349441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:29.726 [2024-11-09 17:33:49.349448] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349461] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.726 [2024-11-09 17:33:49.349488] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.726 [2024-11-09 17:33:49.349493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:29.726 [2024-11-09 17:33:49.349500] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349509] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.726 [2024-11-09 17:33:49.349535] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.726 [2024-11-09 17:33:49.349540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:29.726 [2024-11-09 17:33:49.349546] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349555] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.726 [2024-11-09 17:33:49.349583] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:24:29.726 [2024-11-09 17:33:49.349588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:29.726 [2024-11-09 17:33:49.349594] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349603] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.726 [2024-11-09 17:33:49.349626] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.726 [2024-11-09 17:33:49.349632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:29.726 [2024-11-09 17:33:49.349638] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349647] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.726 [2024-11-09 17:33:49.349674] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.726 [2024-11-09 17:33:49.349680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:29.726 [2024-11-09 17:33:49.349686] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349695] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.726 [2024-11-09 17:33:49.349722] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.726 [2024-11-09 17:33:49.349727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:29.726 [2024-11-09 17:33:49.349734] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349744] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.726 [2024-11-09 17:33:49.349775] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.726 [2024-11-09 17:33:49.349780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:29.726 [2024-11-09 17:33:49.349786] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349795] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349802] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.726 [2024-11-09 17:33:49.349818] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.726 [2024-11-09 17:33:49.349823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:29.726 [2024-11-09 17:33:49.349829] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349838] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.726 [2024-11-09 17:33:49.349863] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.726 [2024-11-09 17:33:49.349869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:29.726 [2024-11-09 17:33:49.349875] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349883] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.726 [2024-11-09 17:33:49.349891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.726 [2024-11-09 17:33:49.349910] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.726 [2024-11-09 17:33:49.349916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.349922] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.349930] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.349938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.349961] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.349967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.349973] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.349982] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.349989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350011] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350022] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf800 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350034] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350057] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350068] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350077] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350102] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350114] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350122] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350144] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350155] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350164] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350189] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350201] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350209] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350232] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 
[2024-11-09 17:33:49.350238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350244] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350252] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350280] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350292] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350301] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350330] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350342] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350350] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350383] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350394] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350403] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350424] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350436] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350444] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350452] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350471] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350482] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350491] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350518] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350530] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350538] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350564] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350576] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350585] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350614] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350626] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350634] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350660] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:24:29.727 [2024-11-09 17:33:49.350671] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350680] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.727 [2024-11-09 17:33:49.350687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.727 [2024-11-09 17:33:49.350709] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.727 [2024-11-09 17:33:49.350714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.350720] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.350729] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.350737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.350752] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.350758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.350764] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.350772] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.350780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.350796] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.350801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.350807] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.350816] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.350824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.350839] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.350846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.350852] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.350861] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.350869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.350884] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 
[2024-11-09 17:33:49.350890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.350896] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.350904] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.350912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.350933] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.350939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.350945] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.350953] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.350961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.350979] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.350984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.350990] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.350999] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.351030] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.351035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.351041] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351050] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.351076] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.351081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.351087] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351096] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351103] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.351119] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.351126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.351132] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351141] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.351164] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.351169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.351175] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351184] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.351215] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.351220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.351227] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351235] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.351259] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.351264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.351270] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351279] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.351308] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.351313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.351320] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf788 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351328] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.351355] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.351361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.351367] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351375] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.351406] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.351411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.351417] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351426] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.351434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.728 [2024-11-09 17:33:49.351451] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.728 [2024-11-09 17:33:49.355463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:29.728 [2024-11-09 17:33:49.355470] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:29.728 [2024-11-09 17:33:49.355479] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:29.729 [2024-11-09 17:33:49.355487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:29.729 [2024-11-09 17:33:49.355505] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:29.729 [2024-11-09 17:33:49.355510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0008 p:0 m:0 dnr:0 00:24:29.729 [2024-11-09 17:33:49.355516] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:29.729 [2024-11-09 17:33:49.355523] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:29.729 d: 0 Kelvin (-273 Celsius) 00:24:29.729 Available Spare: 0% 00:24:29.729 Available Spare Threshold: 0% 00:24:29.729 Life Percentage Used: 0% 00:24:29.729 Data Units Read: 0 00:24:29.729 Data Units Written: 0 00:24:29.729 Host Read Commands: 0 00:24:29.729 Host Write Commands: 0 00:24:29.729 
Controller Busy Time: 0 minutes 00:24:29.729 Power Cycles: 0 00:24:29.729 Power On Hours: 0 hours 00:24:29.729 Unsafe Shutdowns: 0 00:24:29.729 Unrecoverable Media Errors: 0 00:24:29.729 Lifetime Error Log Entries: 0 00:24:29.729 Warning Temperature Time: 0 minutes 00:24:29.729 Critical Temperature Time: 0 minutes 00:24:29.729 00:24:29.729 Number of Queues 00:24:29.729 ================ 00:24:29.729 Number of I/O Submission Queues: 127 00:24:29.729 Number of I/O Completion Queues: 127 00:24:29.729 00:24:29.729 Active Namespaces 00:24:29.729 ================= 00:24:29.729 Namespace ID:1 00:24:29.729 Error Recovery Timeout: Unlimited 00:24:29.729 Command Set Identifier: NVM (00h) 00:24:29.729 Deallocate: Supported 00:24:29.729 Deallocated/Unwritten Error: Not Supported 00:24:29.729 Deallocated Read Value: Unknown 00:24:29.729 Deallocate in Write Zeroes: Not Supported 00:24:29.729 Deallocated Guard Field: 0xFFFF 00:24:29.729 Flush: Supported 00:24:29.729 Reservation: Supported 00:24:29.729 Namespace Sharing Capabilities: Multiple Controllers 00:24:29.729 Size (in LBAs): 131072 (0GiB) 00:24:29.729 Capacity (in LBAs): 131072 (0GiB) 00:24:29.729 Utilization (in LBAs): 131072 (0GiB) 00:24:29.729 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:29.729 EUI64: ABCDEF0123456789 00:24:29.729 UUID: 4cf13ffa-8343-4af8-aeb1-650f546f557a 00:24:29.729 Thin Provisioning: Not Supported 00:24:29.729 Per-NS Atomic Units: Yes 00:24:29.729 Atomic Boundary Size (Normal): 0 00:24:29.729 Atomic Boundary Size (PFail): 0 00:24:29.729 Atomic Boundary Offset: 0 00:24:29.729 Maximum Single Source Range Length: 65535 00:24:29.729 Maximum Copy Length: 65535 00:24:29.729 Maximum Source Range Count: 1 00:24:29.729 NGUID/EUI64 Never Reused: No 00:24:29.729 Namespace Write Protected: No 00:24:29.729 Number of LBA Formats: 1 00:24:29.729 Current LBA Format: LBA Format #00 00:24:29.729 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:29.729 00:24:29.729 17:33:49 -- host/identify.sh@51 -- # sync 00:24:29.729 17:33:49 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.729 17:33:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.729 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:24:29.729 17:33:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.729 17:33:49 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:29.729 17:33:49 -- host/identify.sh@56 -- # nvmftestfini 00:24:29.729 17:33:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:29.729 17:33:49 -- nvmf/common.sh@116 -- # sync 00:24:29.729 17:33:49 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:29.729 17:33:49 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:29.729 17:33:49 -- nvmf/common.sh@119 -- # set +e 00:24:29.729 17:33:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:29.729 17:33:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:29.729 rmmod nvme_rdma 00:24:29.729 rmmod nvme_fabrics 00:24:29.729 17:33:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:29.729 17:33:49 -- nvmf/common.sh@123 -- # set -e 00:24:29.729 17:33:49 -- nvmf/common.sh@124 -- # return 0 00:24:29.729 17:33:49 -- nvmf/common.sh@477 -- # '[' -n 2795633 ']' 00:24:29.729 17:33:49 -- nvmf/common.sh@478 -- # killprocess 2795633 00:24:29.729 17:33:49 -- common/autotest_common.sh@936 -- # '[' -z 2795633 ']' 00:24:29.729 17:33:49 -- common/autotest_common.sh@940 -- # kill -0 2795633 00:24:29.729 17:33:49 -- common/autotest_common.sh@941 -- # uname 00:24:29.729 17:33:49 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:29.729 17:33:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2795633 00:24:29.990 17:33:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:29.990 17:33:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:29.990 17:33:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2795633' 00:24:29.990 killing process with pid 2795633 00:24:29.990 17:33:49 -- common/autotest_common.sh@955 -- # kill 2795633 00:24:29.990 [2024-11-09 17:33:49.524354] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:29.990 17:33:49 -- common/autotest_common.sh@960 -- # wait 2795633 00:24:30.250 17:33:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:30.250 17:33:49 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:30.250 00:24:30.250 real 0m8.909s 00:24:30.250 user 0m8.653s 00:24:30.250 sys 0m5.681s 00:24:30.250 17:33:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:30.250 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:24:30.250 ************************************ 00:24:30.250 END TEST nvmf_identify 00:24:30.250 ************************************ 00:24:30.250 17:33:49 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:24:30.250 17:33:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:30.250 17:33:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:30.250 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:24:30.250 ************************************ 00:24:30.250 START TEST nvmf_perf 00:24:30.250 ************************************ 00:24:30.250 17:33:49 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:24:30.250 * Looking for test storage... 00:24:30.250 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:30.250 17:33:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:30.250 17:33:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:30.250 17:33:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:30.511 17:33:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:30.511 17:33:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:30.511 17:33:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:30.511 17:33:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:30.511 17:33:50 -- scripts/common.sh@335 -- # IFS=.-: 00:24:30.511 17:33:50 -- scripts/common.sh@335 -- # read -ra ver1 00:24:30.511 17:33:50 -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.511 17:33:50 -- scripts/common.sh@336 -- # read -ra ver2 00:24:30.511 17:33:50 -- scripts/common.sh@337 -- # local 'op=<' 00:24:30.511 17:33:50 -- scripts/common.sh@339 -- # ver1_l=2 00:24:30.511 17:33:50 -- scripts/common.sh@340 -- # ver2_l=1 00:24:30.511 17:33:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:30.511 17:33:50 -- scripts/common.sh@343 -- # case "$op" in 00:24:30.511 17:33:50 -- scripts/common.sh@344 -- # : 1 00:24:30.511 17:33:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:30.511 17:33:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:30.511 17:33:50 -- scripts/common.sh@364 -- # decimal 1 00:24:30.511 17:33:50 -- scripts/common.sh@352 -- # local d=1 00:24:30.511 17:33:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.511 17:33:50 -- scripts/common.sh@354 -- # echo 1 00:24:30.511 17:33:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:30.511 17:33:50 -- scripts/common.sh@365 -- # decimal 2 00:24:30.511 17:33:50 -- scripts/common.sh@352 -- # local d=2 00:24:30.511 17:33:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.511 17:33:50 -- scripts/common.sh@354 -- # echo 2 00:24:30.511 17:33:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:30.511 17:33:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:30.511 17:33:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:30.511 17:33:50 -- scripts/common.sh@367 -- # return 0 00:24:30.511 17:33:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.511 17:33:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:30.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.511 --rc genhtml_branch_coverage=1 00:24:30.511 --rc genhtml_function_coverage=1 00:24:30.511 --rc genhtml_legend=1 00:24:30.511 --rc geninfo_all_blocks=1 00:24:30.511 --rc geninfo_unexecuted_blocks=1 00:24:30.511 00:24:30.511 ' 00:24:30.511 17:33:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:30.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.511 --rc genhtml_branch_coverage=1 00:24:30.511 --rc genhtml_function_coverage=1 00:24:30.511 --rc genhtml_legend=1 00:24:30.511 --rc geninfo_all_blocks=1 00:24:30.511 --rc geninfo_unexecuted_blocks=1 00:24:30.511 00:24:30.511 ' 00:24:30.511 17:33:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:30.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.512 --rc genhtml_branch_coverage=1 00:24:30.512 --rc genhtml_function_coverage=1 00:24:30.512 --rc genhtml_legend=1 00:24:30.512 --rc geninfo_all_blocks=1 00:24:30.512 --rc geninfo_unexecuted_blocks=1 00:24:30.512 00:24:30.512 ' 00:24:30.512 17:33:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:30.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.512 --rc genhtml_branch_coverage=1 00:24:30.512 --rc genhtml_function_coverage=1 00:24:30.512 --rc genhtml_legend=1 00:24:30.512 --rc geninfo_all_blocks=1 00:24:30.512 --rc geninfo_unexecuted_blocks=1 00:24:30.512 00:24:30.512 ' 00:24:30.512 17:33:50 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.512 17:33:50 -- nvmf/common.sh@7 -- # uname -s 00:24:30.512 17:33:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.512 17:33:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.512 17:33:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.512 17:33:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.512 17:33:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.512 17:33:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.512 17:33:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.512 17:33:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.512 17:33:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.512 17:33:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.512 17:33:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
00:24:30.512 17:33:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:30.512 17:33:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.512 17:33:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.512 17:33:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.512 17:33:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:30.512 17:33:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.512 17:33:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.512 17:33:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.512 17:33:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.512 17:33:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.512 17:33:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.512 17:33:50 -- paths/export.sh@5 -- # export PATH 00:24:30.512 17:33:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.512 17:33:50 -- nvmf/common.sh@46 -- # : 0 00:24:30.512 17:33:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:30.512 17:33:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:30.512 17:33:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:30.512 17:33:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.512 17:33:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.512 17:33:50 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:30.512 17:33:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:30.512 17:33:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:30.512 17:33:50 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:30.512 17:33:50 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:30.512 17:33:50 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:30.512 17:33:50 -- host/perf.sh@17 -- # nvmftestinit 00:24:30.512 17:33:50 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:30.512 17:33:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.512 17:33:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:30.512 17:33:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:30.512 17:33:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:30.512 17:33:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.512 17:33:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.512 17:33:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.512 17:33:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:30.512 17:33:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:30.512 17:33:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:30.512 17:33:50 -- common/autotest_common.sh@10 -- # set +x 00:24:37.093 17:33:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:37.093 17:33:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:37.093 17:33:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:37.093 17:33:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:37.093 17:33:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:37.093 17:33:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:37.093 17:33:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:37.093 17:33:56 -- nvmf/common.sh@294 -- # net_devs=() 00:24:37.093 17:33:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:37.093 17:33:56 -- nvmf/common.sh@295 -- # e810=() 00:24:37.093 17:33:56 -- nvmf/common.sh@295 -- # local -ga e810 00:24:37.093 17:33:56 -- nvmf/common.sh@296 -- # x722=() 00:24:37.093 17:33:56 -- nvmf/common.sh@296 -- # local -ga x722 00:24:37.093 17:33:56 -- nvmf/common.sh@297 -- # mlx=() 00:24:37.093 17:33:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:37.093 17:33:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.093 17:33:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.093 17:33:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.093 17:33:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.093 17:33:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.093 17:33:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.093 17:33:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.093 17:33:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.093 17:33:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.093 17:33:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.093 17:33:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.093 17:33:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:37.093 17:33:56 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:37.093 17:33:56 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:24:37.093 17:33:56 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:37.093 17:33:56 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:37.093 17:33:56 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:37.093 17:33:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:37.093 17:33:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:37.093 17:33:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:37.093 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:37.093 17:33:56 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:37.093 17:33:56 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:37.093 17:33:56 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:37.093 17:33:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:37.093 17:33:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:37.093 17:33:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:37.093 17:33:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:37.093 17:33:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:37.093 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:37.093 17:33:56 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:37.093 17:33:56 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:37.093 17:33:56 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:37.093 17:33:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:37.093 17:33:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:37.093 17:33:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:37.093 17:33:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:37.093 17:33:56 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:37.093 17:33:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:37.093 17:33:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.093 17:33:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:37.093 17:33:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.093 17:33:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:37.093 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:37.093 17:33:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.093 17:33:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:37.093 17:33:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.094 17:33:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:37.094 17:33:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.094 17:33:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:37.094 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:37.094 17:33:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.094 17:33:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:37.094 17:33:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:37.094 17:33:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:37.094 17:33:56 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:37.094 17:33:56 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:37.094 17:33:56 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:37.094 17:33:56 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:37.094 17:33:56 -- nvmf/common.sh@57 -- # uname 00:24:37.094 17:33:56 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:37.094 17:33:56 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:24:37.094 17:33:56 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:37.094 17:33:56 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:37.094 17:33:56 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:37.094 17:33:56 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:37.094 17:33:56 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:37.094 17:33:56 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:37.094 17:33:56 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:37.094 17:33:56 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:37.094 17:33:56 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:37.094 17:33:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:37.094 17:33:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:37.094 17:33:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:37.094 17:33:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:37.094 17:33:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:37.094 17:33:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:37.094 17:33:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.094 17:33:56 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:37.094 17:33:56 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:37.094 17:33:56 -- nvmf/common.sh@104 -- # continue 2 00:24:37.094 17:33:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:37.094 17:33:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.094 17:33:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:37.094 17:33:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.094 17:33:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:37.094 17:33:56 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:37.094 17:33:56 -- nvmf/common.sh@104 -- # continue 2 00:24:37.094 17:33:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:37.094 17:33:56 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:37.094 17:33:56 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:37.094 17:33:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:37.094 17:33:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:37.094 17:33:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:37.094 17:33:56 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:37.094 17:33:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:37.094 17:33:56 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:37.094 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:37.094 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:37.094 altname enp217s0f0np0 00:24:37.094 altname ens818f0np0 00:24:37.094 inet 192.168.100.8/24 scope global mlx_0_0 00:24:37.094 valid_lft forever preferred_lft forever 00:24:37.094 17:33:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:37.094 17:33:56 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:37.094 17:33:56 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:37.094 17:33:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:37.094 17:33:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:37.094 17:33:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:37.094 17:33:56 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:37.094 17:33:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:37.094 17:33:56 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:37.094 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:37.094 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:37.094 altname enp217s0f1np1 00:24:37.094 altname ens818f1np1 00:24:37.094 inet 192.168.100.9/24 scope global mlx_0_1 00:24:37.094 valid_lft forever preferred_lft forever 00:24:37.094 17:33:56 -- nvmf/common.sh@410 -- # return 0 00:24:37.094 17:33:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:37.094 17:33:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:37.094 17:33:56 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:37.094 17:33:56 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:37.094 17:33:56 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:37.094 17:33:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:37.094 17:33:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:37.094 17:33:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:37.094 17:33:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:37.094 17:33:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:37.094 17:33:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:37.094 17:33:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.094 17:33:56 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:37.094 17:33:56 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:37.094 17:33:56 -- nvmf/common.sh@104 -- # continue 2 00:24:37.094 17:33:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:37.094 17:33:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.094 17:33:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:37.094 17:33:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:37.094 17:33:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:37.094 17:33:56 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:37.094 17:33:56 -- nvmf/common.sh@104 -- # continue 2 00:24:37.094 17:33:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:37.094 17:33:56 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:37.094 17:33:56 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:37.094 17:33:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:37.094 17:33:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:37.094 17:33:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:37.094 17:33:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:37.094 17:33:56 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:37.094 17:33:56 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:37.094 17:33:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:37.094 17:33:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:37.094 17:33:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:37.094 17:33:56 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:37.094 192.168.100.9' 00:24:37.094 17:33:56 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:37.094 192.168.100.9' 00:24:37.094 17:33:56 -- nvmf/common.sh@445 -- # head -n 1 00:24:37.094 17:33:56 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:37.094 17:33:56 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:37.094 192.168.100.9' 00:24:37.094 17:33:56 -- nvmf/common.sh@446 -- # tail -n +2 00:24:37.094 17:33:56 -- nvmf/common.sh@446 -- # head -n 1 00:24:37.094 17:33:56 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:37.094 17:33:56 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:37.094 17:33:56 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:37.094 17:33:56 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:37.094 17:33:56 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:37.094 17:33:56 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:37.354 17:33:56 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:37.354 17:33:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:37.354 17:33:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:37.354 17:33:56 -- common/autotest_common.sh@10 -- # set +x 00:24:37.354 17:33:56 -- nvmf/common.sh@469 -- # nvmfpid=2799364 00:24:37.354 17:33:56 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:37.354 17:33:56 -- nvmf/common.sh@470 -- # waitforlisten 2799364 00:24:37.354 17:33:56 -- common/autotest_common.sh@829 -- # '[' -z 2799364 ']' 00:24:37.354 17:33:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.354 17:33:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:37.354 17:33:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.354 17:33:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:37.354 17:33:56 -- common/autotest_common.sh@10 -- # set +x 00:24:37.354 [2024-11-09 17:33:56.922267] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:37.354 [2024-11-09 17:33:56.922312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.354 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.354 [2024-11-09 17:33:56.991030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.354 [2024-11-09 17:33:57.065113] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:37.354 [2024-11-09 17:33:57.065219] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.354 [2024-11-09 17:33:57.065229] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.354 [2024-11-09 17:33:57.065242] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
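(Annotation, not part of the captured output.) The trace above shows nvmfappstart launching nvmf_tgt and waitforlisten blocking until the target's RPC socket answers before perf.sh issues any rpc.py calls. A minimal stand-alone sketch of that startup handshake, assuming the default /var/tmp/spdk.sock socket and using only the nvmf_tgt flags and rpc.py script visible in this log; rpc_get_methods stands in for whatever probe the real waitforlisten helper performs:

# Sketch only: simplified from the nvmfappstart/waitforlisten helpers traced above.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Block until the target's RPC server responds on /var/tmp/spdk.sock;
# rpc_get_methods is used here as a cheap liveness probe (an assumption, not the exact check).
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done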
00:24:37.354 [2024-11-09 17:33:57.065289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.355 [2024-11-09 17:33:57.065386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.355 [2024-11-09 17:33:57.065478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.355 [2024-11-09 17:33:57.065480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.295 17:33:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:38.295 17:33:57 -- common/autotest_common.sh@862 -- # return 0 00:24:38.295 17:33:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:38.295 17:33:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:38.295 17:33:57 -- common/autotest_common.sh@10 -- # set +x 00:24:38.295 17:33:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.295 17:33:57 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:38.295 17:33:57 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:41.592 17:34:00 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:41.592 17:34:00 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:41.592 17:34:01 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:24:41.592 17:34:01 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:41.592 17:34:01 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:41.592 17:34:01 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:24:41.592 17:34:01 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:41.592 17:34:01 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:24:41.592 17:34:01 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:24:41.852 [2024-11-09 17:34:01.411514] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:24:41.852 [2024-11-09 17:34:01.432033] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c16430/0x1c23fc0) succeed. 00:24:41.852 [2024-11-09 17:34:01.441430] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c17a20/0x1c65660) succeed. 
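(Annotation, not part of the captured output.) With the RDMA transport created above, perf.sh next provisions the test subsystem over RPC. The sequence below is a sketch assembled from the commands traced in this log; the malloc bdev size and block size come from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set earlier, and Nvme0n1 is the local NVMe drive at 0000:d8:00.0 attached via gen_nvme.sh/load_subsystem_config. Illustrative only, not a replacement for the script itself:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# Back-end bdevs: a 64 MB malloc bdev (Malloc0) alongside the local NVMe drive (Nvme0n1)
$rpc bdev_malloc_create 64 512
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
# NVMe-oF subsystem with both bdevs exported as namespaces
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
# RDMA listeners for the subsystem and for discovery on the mlx5 port found earlier
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420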
00:24:41.852 17:34:01 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:42.113 17:34:01 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:42.113 17:34:01 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:42.373 17:34:01 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:42.373 17:34:01 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:42.373 17:34:02 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:42.633 [2024-11-09 17:34:02.288654] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:42.633 17:34:02 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:42.893 17:34:02 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:24:42.893 17:34:02 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:24:42.893 17:34:02 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:42.893 17:34:02 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:24:44.277 Initializing NVMe Controllers 00:24:44.277 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:24:44.277 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:24:44.277 Initialization complete. Launching workers. 00:24:44.277 ======================================================== 00:24:44.277 Latency(us) 00:24:44.277 Device Information : IOPS MiB/s Average min max 00:24:44.277 PCIE (0000:d8:00.0) NSID 1 from core 0: 102609.41 400.82 311.49 30.12 4338.38 00:24:44.277 ======================================================== 00:24:44.277 Total : 102609.41 400.82 311.49 30.12 4338.38 00:24:44.277 00:24:44.277 17:34:03 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:44.277 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.573 Initializing NVMe Controllers 00:24:47.573 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.573 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:47.573 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:47.573 Initialization complete. Launching workers. 
00:24:47.573 ======================================================== 00:24:47.574 Latency(us) 00:24:47.574 Device Information : IOPS MiB/s Average min max 00:24:47.574 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6828.99 26.68 146.24 48.74 5037.73 00:24:47.574 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5286.99 20.65 188.95 66.95 5043.66 00:24:47.574 ======================================================== 00:24:47.574 Total : 12115.99 47.33 164.88 48.74 5043.66 00:24:47.574 00:24:47.574 17:34:07 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:47.574 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.871 Initializing NVMe Controllers 00:24:50.871 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:50.871 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:50.871 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:50.871 Initialization complete. Launching workers. 00:24:50.871 ======================================================== 00:24:50.871 Latency(us) 00:24:50.871 Device Information : IOPS MiB/s Average min max 00:24:50.871 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19592.00 76.53 1633.56 449.42 5680.23 00:24:50.871 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7971.25 6074.05 9049.49 00:24:50.871 ======================================================== 00:24:50.871 Total : 23624.00 92.28 2715.24 449.42 9049.49 00:24:50.871 00:24:50.871 17:34:10 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:24:50.871 17:34:10 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:50.871 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.157 Initializing NVMe Controllers 00:24:56.157 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:56.157 Controller IO queue size 128, less than required. 00:24:56.157 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:56.157 Controller IO queue size 128, less than required. 00:24:56.157 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:56.157 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:56.157 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:56.157 Initialization complete. Launching workers. 
00:24:56.157 ======================================================== 00:24:56.157 Latency(us) 00:24:56.157 Device Information : IOPS MiB/s Average min max 00:24:56.157 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4110.50 1027.62 31297.14 14033.94 70688.81 00:24:56.157 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4156.50 1039.12 30609.13 14321.20 47962.71 00:24:56.157 ======================================================== 00:24:56.157 Total : 8267.00 2066.75 30951.22 14033.94 70688.81 00:24:56.157 00:24:56.157 17:34:14 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:24:56.157 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.157 No valid NVMe controllers or AIO or URING devices found 00:24:56.157 Initializing NVMe Controllers 00:24:56.157 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:56.157 Controller IO queue size 128, less than required. 00:24:56.157 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:56.157 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:56.157 Controller IO queue size 128, less than required. 00:24:56.157 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:56.157 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:56.157 WARNING: Some requested NVMe devices were skipped 00:24:56.157 17:34:15 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:24:56.157 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.358 Initializing NVMe Controllers 00:25:00.358 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:00.358 Controller IO queue size 128, less than required. 00:25:00.358 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:00.358 Controller IO queue size 128, less than required. 00:25:00.358 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:00.358 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:00.358 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:00.358 Initialization complete. Launching workers. 
00:25:00.358 00:25:00.358 ==================== 00:25:00.358 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:00.358 RDMA transport: 00:25:00.358 dev name: mlx5_0 00:25:00.358 polls: 419468 00:25:00.358 idle_polls: 415244 00:25:00.358 completions: 46495 00:25:00.358 queued_requests: 1 00:25:00.358 total_send_wrs: 23311 00:25:00.358 send_doorbell_updates: 4018 00:25:00.358 total_recv_wrs: 23311 00:25:00.358 recv_doorbell_updates: 4018 00:25:00.358 --------------------------------- 00:25:00.358 00:25:00.358 ==================== 00:25:00.358 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:00.358 RDMA transport: 00:25:00.358 dev name: mlx5_0 00:25:00.358 polls: 422419 00:25:00.358 idle_polls: 422152 00:25:00.358 completions: 20297 00:25:00.358 queued_requests: 1 00:25:00.358 total_send_wrs: 10212 00:25:00.358 send_doorbell_updates: 252 00:25:00.358 total_recv_wrs: 10212 00:25:00.358 recv_doorbell_updates: 252 00:25:00.358 --------------------------------- 00:25:00.358 ======================================================== 00:25:00.358 Latency(us) 00:25:00.358 Device Information : IOPS MiB/s Average min max 00:25:00.358 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5859.50 1464.88 21911.08 11003.66 57058.21 00:25:00.358 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2584.50 646.12 49623.36 29121.61 76502.03 00:25:00.358 ======================================================== 00:25:00.358 Total : 8444.00 2111.00 30393.13 11003.66 76502.03 00:25:00.358 00:25:00.358 17:34:19 -- host/perf.sh@66 -- # sync 00:25:00.358 17:34:19 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:00.358 17:34:19 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:00.358 17:34:19 -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:25:00.358 17:34:19 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:06.933 17:34:25 -- host/perf.sh@72 -- # ls_guid=2e1c9eed-c1fe-4191-b7b1-ae737bb40aab 00:25:06.933 17:34:25 -- host/perf.sh@73 -- # get_lvs_free_mb 2e1c9eed-c1fe-4191-b7b1-ae737bb40aab 00:25:06.933 17:34:25 -- common/autotest_common.sh@1353 -- # local lvs_uuid=2e1c9eed-c1fe-4191-b7b1-ae737bb40aab 00:25:06.933 17:34:25 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:06.933 17:34:25 -- common/autotest_common.sh@1355 -- # local fc 00:25:06.933 17:34:25 -- common/autotest_common.sh@1356 -- # local cs 00:25:06.933 17:34:25 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:06.933 17:34:25 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:06.933 { 00:25:06.933 "uuid": "2e1c9eed-c1fe-4191-b7b1-ae737bb40aab", 00:25:06.933 "name": "lvs_0", 00:25:06.933 "base_bdev": "Nvme0n1", 00:25:06.933 "total_data_clusters": 476466, 00:25:06.933 "free_clusters": 476466, 00:25:06.933 "block_size": 512, 00:25:06.933 "cluster_size": 4194304 00:25:06.933 } 00:25:06.933 ]' 00:25:06.933 17:34:25 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="2e1c9eed-c1fe-4191-b7b1-ae737bb40aab") .free_clusters' 00:25:06.933 17:34:26 -- common/autotest_common.sh@1358 -- # fc=476466 00:25:06.933 17:34:26 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="2e1c9eed-c1fe-4191-b7b1-ae737bb40aab") .cluster_size' 00:25:06.933 
17:34:26 -- common/autotest_common.sh@1359 -- # cs=4194304 00:25:06.933 17:34:26 -- common/autotest_common.sh@1362 -- # free_mb=1905864 00:25:06.933 17:34:26 -- common/autotest_common.sh@1363 -- # echo 1905864 00:25:06.933 1905864 00:25:06.933 17:34:26 -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:25:06.933 17:34:26 -- host/perf.sh@78 -- # free_mb=20480 00:25:06.933 17:34:26 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2e1c9eed-c1fe-4191-b7b1-ae737bb40aab lbd_0 20480 00:25:06.933 17:34:26 -- host/perf.sh@80 -- # lb_guid=60c2f377-0fb5-492b-abaf-e36e478cbc49 00:25:06.933 17:34:26 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 60c2f377-0fb5-492b-abaf-e36e478cbc49 lvs_n_0 00:25:08.843 17:34:28 -- host/perf.sh@83 -- # ls_nested_guid=62e6beba-806a-4e73-84a5-ba9e50bb5840 00:25:08.843 17:34:28 -- host/perf.sh@84 -- # get_lvs_free_mb 62e6beba-806a-4e73-84a5-ba9e50bb5840 00:25:08.843 17:34:28 -- common/autotest_common.sh@1353 -- # local lvs_uuid=62e6beba-806a-4e73-84a5-ba9e50bb5840 00:25:08.843 17:34:28 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:08.843 17:34:28 -- common/autotest_common.sh@1355 -- # local fc 00:25:08.843 17:34:28 -- common/autotest_common.sh@1356 -- # local cs 00:25:08.843 17:34:28 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:09.102 17:34:28 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:09.102 { 00:25:09.102 "uuid": "2e1c9eed-c1fe-4191-b7b1-ae737bb40aab", 00:25:09.102 "name": "lvs_0", 00:25:09.102 "base_bdev": "Nvme0n1", 00:25:09.102 "total_data_clusters": 476466, 00:25:09.102 "free_clusters": 471346, 00:25:09.102 "block_size": 512, 00:25:09.102 "cluster_size": 4194304 00:25:09.102 }, 00:25:09.102 { 00:25:09.102 "uuid": "62e6beba-806a-4e73-84a5-ba9e50bb5840", 00:25:09.102 "name": "lvs_n_0", 00:25:09.102 "base_bdev": "60c2f377-0fb5-492b-abaf-e36e478cbc49", 00:25:09.102 "total_data_clusters": 5114, 00:25:09.102 "free_clusters": 5114, 00:25:09.102 "block_size": 512, 00:25:09.102 "cluster_size": 4194304 00:25:09.102 } 00:25:09.102 ]' 00:25:09.102 17:34:28 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="62e6beba-806a-4e73-84a5-ba9e50bb5840") .free_clusters' 00:25:09.102 17:34:28 -- common/autotest_common.sh@1358 -- # fc=5114 00:25:09.102 17:34:28 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="62e6beba-806a-4e73-84a5-ba9e50bb5840") .cluster_size' 00:25:09.102 17:34:28 -- common/autotest_common.sh@1359 -- # cs=4194304 00:25:09.102 17:34:28 -- common/autotest_common.sh@1362 -- # free_mb=20456 00:25:09.103 17:34:28 -- common/autotest_common.sh@1363 -- # echo 20456 00:25:09.103 20456 00:25:09.103 17:34:28 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:09.103 17:34:28 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 62e6beba-806a-4e73-84a5-ba9e50bb5840 lbd_nest_0 20456 00:25:09.362 17:34:28 -- host/perf.sh@88 -- # lb_nested_guid=133d7d92-a42e-4982-931d-5bb59f7558b2 00:25:09.362 17:34:28 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:09.623 17:34:29 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:09.623 17:34:29 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 133d7d92-a42e-4982-931d-5bb59f7558b2 00:25:09.623 17:34:29 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:09.883 17:34:29 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:09.883 17:34:29 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:09.883 17:34:29 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:09.883 17:34:29 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:09.883 17:34:29 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:09.883 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.218 Initializing NVMe Controllers 00:25:22.218 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:22.218 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:22.218 Initialization complete. Launching workers. 00:25:22.218 ======================================================== 00:25:22.218 Latency(us) 00:25:22.218 Device Information : IOPS MiB/s Average min max 00:25:22.218 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5995.70 2.93 166.45 67.18 8021.14 00:25:22.218 ======================================================== 00:25:22.218 Total : 5995.70 2.93 166.45 67.18 8021.14 00:25:22.218 00:25:22.218 17:34:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:22.218 17:34:40 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:22.218 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.443 Initializing NVMe Controllers 00:25:34.443 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:34.443 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:34.443 Initialization complete. Launching workers. 00:25:34.443 ======================================================== 00:25:34.443 Latency(us) 00:25:34.443 Device Information : IOPS MiB/s Average min max 00:25:34.443 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2680.20 335.02 372.91 154.30 6048.24 00:25:34.443 ======================================================== 00:25:34.443 Total : 2680.20 335.02 372.91 154.30 6048.24 00:25:34.443 00:25:34.443 17:34:52 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:34.443 17:34:52 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:34.443 17:34:52 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:34.443 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.429 Initializing NVMe Controllers 00:25:44.429 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:44.429 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:44.429 Initialization complete. Launching workers. 
00:25:44.429 ======================================================== 00:25:44.429 Latency(us) 00:25:44.429 Device Information : IOPS MiB/s Average min max 00:25:44.429 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12357.70 6.03 2589.36 850.23 9040.78 00:25:44.429 ======================================================== 00:25:44.429 Total : 12357.70 6.03 2589.36 850.23 9040.78 00:25:44.429 00:25:44.429 17:35:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:44.429 17:35:03 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:44.429 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.642 Initializing NVMe Controllers 00:25:56.642 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:56.642 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:56.642 Initialization complete. Launching workers. 00:25:56.642 ======================================================== 00:25:56.642 Latency(us) 00:25:56.642 Device Information : IOPS MiB/s Average min max 00:25:56.642 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4004.80 500.60 7996.12 4897.14 15011.55 00:25:56.642 ======================================================== 00:25:56.642 Total : 4004.80 500.60 7996.12 4897.14 15011.55 00:25:56.642 00:25:56.642 17:35:14 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:56.642 17:35:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:56.642 17:35:14 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:56.642 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.627 Initializing NVMe Controllers 00:26:06.627 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:06.627 Controller IO queue size 128, less than required. 00:26:06.627 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:06.627 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:06.627 Initialization complete. Launching workers. 00:26:06.627 ======================================================== 00:26:06.627 Latency(us) 00:26:06.627 Device Information : IOPS MiB/s Average min max 00:26:06.627 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19839.60 9.69 6453.96 1782.14 14865.00 00:26:06.627 ======================================================== 00:26:06.627 Total : 19839.60 9.69 6453.96 1782.14 14865.00 00:26:06.627 00:26:06.894 17:35:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:06.894 17:35:26 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:06.894 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.115 Initializing NVMe Controllers 00:26:19.115 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:19.115 Controller IO queue size 128, less than required. 00:26:19.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:26:19.115 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:19.115 Initialization complete. Launching workers. 00:26:19.115 ======================================================== 00:26:19.115 Latency(us) 00:26:19.115 Device Information : IOPS MiB/s Average min max 00:26:19.115 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11436.30 1429.54 11192.51 3199.86 23795.41 00:26:19.115 ======================================================== 00:26:19.115 Total : 11436.30 1429.54 11192.51 3199.86 23795.41 00:26:19.115 00:26:19.115 17:35:37 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:19.115 17:35:37 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 133d7d92-a42e-4982-931d-5bb59f7558b2 00:26:19.115 17:35:38 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:19.115 17:35:38 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 60c2f377-0fb5-492b-abaf-e36e478cbc49 00:26:19.374 17:35:38 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:26:19.636 17:35:39 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:19.636 17:35:39 -- host/perf.sh@114 -- # nvmftestfini 00:26:19.636 17:35:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:19.636 17:35:39 -- nvmf/common.sh@116 -- # sync 00:26:19.636 17:35:39 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:26:19.636 17:35:39 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:26:19.636 17:35:39 -- nvmf/common.sh@119 -- # set +e 00:26:19.636 17:35:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:19.636 17:35:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:26:19.636 rmmod nvme_rdma 00:26:19.636 rmmod nvme_fabrics 00:26:19.636 17:35:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:19.636 17:35:39 -- nvmf/common.sh@123 -- # set -e 00:26:19.636 17:35:39 -- nvmf/common.sh@124 -- # return 0 00:26:19.636 17:35:39 -- nvmf/common.sh@477 -- # '[' -n 2799364 ']' 00:26:19.636 17:35:39 -- nvmf/common.sh@478 -- # killprocess 2799364 00:26:19.636 17:35:39 -- common/autotest_common.sh@936 -- # '[' -z 2799364 ']' 00:26:19.636 17:35:39 -- common/autotest_common.sh@940 -- # kill -0 2799364 00:26:19.636 17:35:39 -- common/autotest_common.sh@941 -- # uname 00:26:19.636 17:35:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:19.636 17:35:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2799364 00:26:19.636 17:35:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:19.636 17:35:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:19.636 17:35:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2799364' 00:26:19.636 killing process with pid 2799364 00:26:19.636 17:35:39 -- common/autotest_common.sh@955 -- # kill 2799364 00:26:19.636 17:35:39 -- common/autotest_common.sh@960 -- # wait 2799364 00:26:22.173 17:35:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:22.173 17:35:41 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:26:22.173 00:26:22.173 real 1m51.853s 00:26:22.173 user 7m2.199s 00:26:22.173 sys 0m7.271s 00:26:22.173 17:35:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:22.173 17:35:41 -- 
common/autotest_common.sh@10 -- # set +x 00:26:22.173 ************************************ 00:26:22.173 END TEST nvmf_perf 00:26:22.173 ************************************ 00:26:22.173 17:35:41 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:26:22.173 17:35:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:22.173 17:35:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:22.173 17:35:41 -- common/autotest_common.sh@10 -- # set +x 00:26:22.173 ************************************ 00:26:22.173 START TEST nvmf_fio_host 00:26:22.173 ************************************ 00:26:22.173 17:35:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:26:22.173 * Looking for test storage... 00:26:22.173 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:22.173 17:35:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:22.173 17:35:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:22.173 17:35:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:22.173 17:35:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:22.173 17:35:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:22.173 17:35:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:22.173 17:35:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:22.173 17:35:41 -- scripts/common.sh@335 -- # IFS=.-: 00:26:22.173 17:35:41 -- scripts/common.sh@335 -- # read -ra ver1 00:26:22.173 17:35:41 -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.173 17:35:41 -- scripts/common.sh@336 -- # read -ra ver2 00:26:22.173 17:35:41 -- scripts/common.sh@337 -- # local 'op=<' 00:26:22.173 17:35:41 -- scripts/common.sh@339 -- # ver1_l=2 00:26:22.173 17:35:41 -- scripts/common.sh@340 -- # ver2_l=1 00:26:22.173 17:35:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:22.173 17:35:41 -- scripts/common.sh@343 -- # case "$op" in 00:26:22.173 17:35:41 -- scripts/common.sh@344 -- # : 1 00:26:22.173 17:35:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:22.173 17:35:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:22.173 17:35:41 -- scripts/common.sh@364 -- # decimal 1 00:26:22.173 17:35:41 -- scripts/common.sh@352 -- # local d=1 00:26:22.173 17:35:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.173 17:35:41 -- scripts/common.sh@354 -- # echo 1 00:26:22.173 17:35:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:22.173 17:35:41 -- scripts/common.sh@365 -- # decimal 2 00:26:22.173 17:35:41 -- scripts/common.sh@352 -- # local d=2 00:26:22.173 17:35:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.173 17:35:41 -- scripts/common.sh@354 -- # echo 2 00:26:22.173 17:35:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:22.173 17:35:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:22.173 17:35:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:22.173 17:35:41 -- scripts/common.sh@367 -- # return 0 00:26:22.173 17:35:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.173 17:35:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:22.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.173 --rc genhtml_branch_coverage=1 00:26:22.173 --rc genhtml_function_coverage=1 00:26:22.173 --rc genhtml_legend=1 00:26:22.173 --rc geninfo_all_blocks=1 00:26:22.173 --rc geninfo_unexecuted_blocks=1 00:26:22.173 00:26:22.173 ' 00:26:22.173 17:35:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:22.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.173 --rc genhtml_branch_coverage=1 00:26:22.173 --rc genhtml_function_coverage=1 00:26:22.173 --rc genhtml_legend=1 00:26:22.173 --rc geninfo_all_blocks=1 00:26:22.173 --rc geninfo_unexecuted_blocks=1 00:26:22.173 00:26:22.173 ' 00:26:22.173 17:35:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:22.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.173 --rc genhtml_branch_coverage=1 00:26:22.173 --rc genhtml_function_coverage=1 00:26:22.173 --rc genhtml_legend=1 00:26:22.173 --rc geninfo_all_blocks=1 00:26:22.173 --rc geninfo_unexecuted_blocks=1 00:26:22.173 00:26:22.173 ' 00:26:22.434 17:35:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:22.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.434 --rc genhtml_branch_coverage=1 00:26:22.434 --rc genhtml_function_coverage=1 00:26:22.434 --rc genhtml_legend=1 00:26:22.434 --rc geninfo_all_blocks=1 00:26:22.434 --rc geninfo_unexecuted_blocks=1 00:26:22.434 00:26:22.434 ' 00:26:22.434 17:35:41 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:22.434 17:35:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.434 17:35:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.434 17:35:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.434 17:35:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.434 17:35:41 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.434 17:35:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.434 17:35:41 -- paths/export.sh@5 -- # export PATH 00:26:22.434 17:35:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.434 17:35:41 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.434 17:35:41 -- nvmf/common.sh@7 -- # uname -s 00:26:22.434 17:35:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.434 17:35:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.434 17:35:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.434 17:35:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.434 17:35:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.434 17:35:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.434 17:35:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.434 17:35:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.434 17:35:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.434 17:35:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.434 17:35:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:22.434 17:35:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:22.434 17:35:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.434 17:35:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.434 17:35:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.434 17:35:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:22.434 17:35:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.434 17:35:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.434 17:35:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.434 17:35:41 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.434 17:35:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.434 17:35:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.434 17:35:41 -- paths/export.sh@5 -- # export PATH 00:26:22.434 17:35:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.434 17:35:41 -- nvmf/common.sh@46 -- # : 0 00:26:22.434 17:35:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:22.434 17:35:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:22.434 17:35:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:22.434 17:35:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.434 17:35:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.434 17:35:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:22.434 17:35:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:22.434 17:35:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:22.434 17:35:41 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:26:22.434 17:35:41 -- host/fio.sh@14 -- # nvmftestinit 00:26:22.434 17:35:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:26:22.434 17:35:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.434 17:35:41 -- 
nvmf/common.sh@436 -- # prepare_net_devs 00:26:22.434 17:35:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:22.434 17:35:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:22.434 17:35:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.434 17:35:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:22.434 17:35:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.434 17:35:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:22.434 17:35:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:22.434 17:35:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:22.434 17:35:41 -- common/autotest_common.sh@10 -- # set +x 00:26:29.007 17:35:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:29.007 17:35:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:29.007 17:35:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:29.007 17:35:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:29.007 17:35:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:29.007 17:35:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:29.007 17:35:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:29.007 17:35:47 -- nvmf/common.sh@294 -- # net_devs=() 00:26:29.007 17:35:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:29.007 17:35:47 -- nvmf/common.sh@295 -- # e810=() 00:26:29.007 17:35:47 -- nvmf/common.sh@295 -- # local -ga e810 00:26:29.007 17:35:47 -- nvmf/common.sh@296 -- # x722=() 00:26:29.007 17:35:47 -- nvmf/common.sh@296 -- # local -ga x722 00:26:29.007 17:35:47 -- nvmf/common.sh@297 -- # mlx=() 00:26:29.007 17:35:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:29.007 17:35:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.007 17:35:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.007 17:35:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.007 17:35:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.007 17:35:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.007 17:35:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.007 17:35:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.007 17:35:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.007 17:35:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.007 17:35:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.007 17:35:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.007 17:35:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:29.007 17:35:47 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:26:29.007 17:35:47 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:26:29.007 17:35:47 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:26:29.007 17:35:47 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:26:29.007 17:35:47 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:26:29.007 17:35:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:29.007 17:35:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:29.007 17:35:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:29.007 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:29.007 17:35:47 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:26:29.007 17:35:47 -- nvmf/common.sh@345 -- # [[ mlx5_core == 
unbound ]] 00:26:29.007 17:35:47 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:29.007 17:35:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:29.007 17:35:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:26:29.007 17:35:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:26:29.007 17:35:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:29.007 17:35:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:29.007 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:29.007 17:35:47 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:26:29.007 17:35:47 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:26:29.007 17:35:47 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:29.007 17:35:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:29.007 17:35:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:26:29.007 17:35:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:26:29.007 17:35:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:29.007 17:35:47 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:26:29.007 17:35:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:29.008 17:35:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.008 17:35:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:29.008 17:35:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.008 17:35:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:29.008 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:29.008 17:35:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.008 17:35:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:29.008 17:35:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.008 17:35:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:29.008 17:35:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.008 17:35:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:29.008 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:29.008 17:35:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.008 17:35:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:29.008 17:35:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:29.008 17:35:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:29.008 17:35:47 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:26:29.008 17:35:47 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:26:29.008 17:35:47 -- nvmf/common.sh@408 -- # rdma_device_init 00:26:29.008 17:35:47 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:26:29.008 17:35:47 -- nvmf/common.sh@57 -- # uname 00:26:29.008 17:35:47 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:26:29.008 17:35:47 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:26:29.008 17:35:47 -- nvmf/common.sh@62 -- # modprobe ib_core 00:26:29.008 17:35:47 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:26:29.008 17:35:47 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:26:29.008 17:35:47 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:26:29.008 17:35:47 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:26:29.008 17:35:47 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:26:29.008 17:35:47 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:26:29.008 17:35:47 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:29.008 17:35:47 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:26:29.008 17:35:47 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:29.008 17:35:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:26:29.008 17:35:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:26:29.008 17:35:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:29.008 17:35:47 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:26:29.008 17:35:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:29.008 17:35:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:29.008 17:35:47 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:29.008 17:35:47 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:26:29.008 17:35:47 -- nvmf/common.sh@104 -- # continue 2 00:26:29.008 17:35:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:29.008 17:35:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:29.008 17:35:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:29.008 17:35:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:29.008 17:35:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:29.008 17:35:47 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:26:29.008 17:35:47 -- nvmf/common.sh@104 -- # continue 2 00:26:29.008 17:35:47 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:26:29.008 17:35:47 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:26:29.008 17:35:47 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:26:29.008 17:35:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:26:29.008 17:35:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:29.008 17:35:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:29.008 17:35:47 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:26:29.008 17:35:47 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:26:29.008 17:35:47 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:26:29.008 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:29.008 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:29.008 altname enp217s0f0np0 00:26:29.008 altname ens818f0np0 00:26:29.008 inet 192.168.100.8/24 scope global mlx_0_0 00:26:29.008 valid_lft forever preferred_lft forever 00:26:29.008 17:35:47 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:26:29.008 17:35:47 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:26:29.008 17:35:47 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:26:29.008 17:35:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:26:29.008 17:35:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:29.008 17:35:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:29.008 17:35:47 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:26:29.008 17:35:47 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:26:29.008 17:35:47 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:26:29.008 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:29.008 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:29.008 altname enp217s0f1np1 00:26:29.008 altname ens818f1np1 00:26:29.008 inet 192.168.100.9/24 scope global mlx_0_1 00:26:29.008 valid_lft forever preferred_lft forever 00:26:29.008 17:35:47 -- nvmf/common.sh@410 -- # return 0 00:26:29.008 17:35:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:29.008 17:35:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:29.008 17:35:47 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:26:29.008 17:35:47 -- nvmf/common.sh@444 -- # get_available_rdma_ips 
00:26:29.008 17:35:47 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:26:29.008 17:35:47 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:29.008 17:35:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:26:29.008 17:35:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:26:29.008 17:35:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:29.008 17:35:47 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:26:29.008 17:35:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:29.008 17:35:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:29.008 17:35:47 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:29.008 17:35:47 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:26:29.008 17:35:47 -- nvmf/common.sh@104 -- # continue 2 00:26:29.008 17:35:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:29.008 17:35:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:29.008 17:35:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:29.008 17:35:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:29.008 17:35:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:29.008 17:35:47 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:26:29.008 17:35:47 -- nvmf/common.sh@104 -- # continue 2 00:26:29.008 17:35:47 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:26:29.008 17:35:47 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:26:29.008 17:35:47 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:26:29.008 17:35:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:26:29.008 17:35:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:29.008 17:35:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:29.008 17:35:47 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:26:29.008 17:35:47 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:26:29.008 17:35:47 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:26:29.008 17:35:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:26:29.008 17:35:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:29.008 17:35:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:29.008 17:35:47 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:26:29.008 192.168.100.9' 00:26:29.008 17:35:47 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:26:29.008 192.168.100.9' 00:26:29.008 17:35:47 -- nvmf/common.sh@445 -- # head -n 1 00:26:29.008 17:35:47 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:29.008 17:35:47 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:26:29.008 192.168.100.9' 00:26:29.008 17:35:47 -- nvmf/common.sh@446 -- # tail -n +2 00:26:29.008 17:35:47 -- nvmf/common.sh@446 -- # head -n 1 00:26:29.008 17:35:47 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:29.008 17:35:47 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:26:29.008 17:35:47 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:29.008 17:35:47 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:26:29.008 17:35:47 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:26:29.008 17:35:47 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:26:29.008 17:35:47 -- host/fio.sh@16 -- # [[ y != y ]] 00:26:29.008 17:35:47 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:29.008 17:35:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:29.008 17:35:47 -- common/autotest_common.sh@10 -- # set +x 
00:26:29.008 17:35:48 -- host/fio.sh@24 -- # nvmfpid=2820108 00:26:29.008 17:35:48 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:29.008 17:35:48 -- host/fio.sh@28 -- # waitforlisten 2820108 00:26:29.008 17:35:48 -- common/autotest_common.sh@829 -- # '[' -z 2820108 ']' 00:26:29.008 17:35:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.008 17:35:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:29.008 17:35:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.008 17:35:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:29.008 17:35:48 -- common/autotest_common.sh@10 -- # set +x 00:26:29.008 17:35:48 -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:29.008 [2024-11-09 17:35:48.047733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:29.009 [2024-11-09 17:35:48.047779] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.009 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.009 [2024-11-09 17:35:48.117039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:29.009 [2024-11-09 17:35:48.191518] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:29.009 [2024-11-09 17:35:48.191625] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.009 [2024-11-09 17:35:48.191635] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.009 [2024-11-09 17:35:48.191644] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:29.009 [2024-11-09 17:35:48.191694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.009 [2024-11-09 17:35:48.191713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.009 [2024-11-09 17:35:48.191799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.009 [2024-11-09 17:35:48.191801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.268 17:35:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:29.268 17:35:48 -- common/autotest_common.sh@862 -- # return 0 00:26:29.268 17:35:48 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:29.527 [2024-11-09 17:35:49.061400] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x577090/0x57b580) succeed. 00:26:29.527 [2024-11-09 17:35:49.070526] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x578680/0x5bcc20) succeed. 
00:26:29.527 17:35:49 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:29.527 17:35:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:29.527 17:35:49 -- common/autotest_common.sh@10 -- # set +x 00:26:29.527 17:35:49 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:29.786 Malloc1 00:26:29.786 17:35:49 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:30.050 17:35:49 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:30.313 17:35:49 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:30.313 [2024-11-09 17:35:49.970587] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:30.313 17:35:49 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:30.573 17:35:50 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:26:30.573 17:35:50 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:30.573 17:35:50 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:30.573 17:35:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:30.573 17:35:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:30.573 17:35:50 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:30.573 17:35:50 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:30.573 17:35:50 -- common/autotest_common.sh@1330 -- # shift 00:26:30.573 17:35:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:30.573 17:35:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.573 17:35:50 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:30.573 17:35:50 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:30.573 17:35:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:30.573 17:35:50 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:30.573 17:35:50 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:30.573 17:35:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.573 17:35:50 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:30.573 17:35:50 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:30.573 17:35:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:30.573 17:35:50 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:30.573 17:35:50 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:30.573 17:35:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:30.573 17:35:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:30.833 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:30.833 fio-3.35 00:26:30.833 Starting 1 thread 00:26:30.833 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.385 00:26:33.385 test: (groupid=0, jobs=1): err= 0: pid=2820800: Sat Nov 9 17:35:52 2024 00:26:33.386 read: IOPS=18.9k, BW=73.9MiB/s (77.5MB/s)(148MiB/2003msec) 00:26:33.386 slat (nsec): min=1330, max=27743, avg=1448.61, stdev=436.96 00:26:33.386 clat (usec): min=1853, max=6029, avg=3356.76, stdev=67.15 00:26:33.386 lat (usec): min=1880, max=6031, avg=3358.21, stdev=67.09 00:26:33.386 clat percentiles (usec): 00:26:33.386 | 1.00th=[ 3326], 5.00th=[ 3326], 10.00th=[ 3326], 20.00th=[ 3359], 00:26:33.386 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3359], 00:26:33.386 | 70.00th=[ 3359], 80.00th=[ 3359], 90.00th=[ 3359], 95.00th=[ 3392], 00:26:33.386 | 99.00th=[ 3392], 99.50th=[ 3392], 99.90th=[ 4293], 99.95th=[ 5145], 00:26:33.386 | 99.99th=[ 5997] 00:26:33.386 bw ( KiB/s): min=74080, max=76296, per=99.98%, avg=75708.00, stdev=1086.64, samples=4 00:26:33.386 iops : min=18520, max=19074, avg=18927.00, stdev=271.66, samples=4 00:26:33.386 write: IOPS=18.9k, BW=74.0MiB/s (77.6MB/s)(148MiB/2003msec); 0 zone resets 00:26:33.386 slat (nsec): min=1364, max=17442, avg=1534.50, stdev=449.20 00:26:33.386 clat (usec): min=2587, max=6041, avg=3355.67, stdev=76.14 00:26:33.386 lat (usec): min=2598, max=6042, avg=3357.21, stdev=76.08 00:26:33.386 clat percentiles (usec): 00:26:33.386 | 1.00th=[ 3326], 5.00th=[ 3326], 10.00th=[ 3326], 20.00th=[ 3326], 00:26:33.386 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3359], 00:26:33.386 | 70.00th=[ 3359], 80.00th=[ 3359], 90.00th=[ 3359], 95.00th=[ 3392], 00:26:33.386 | 99.00th=[ 3392], 99.50th=[ 3392], 99.90th=[ 4359], 99.95th=[ 5538], 00:26:33.386 | 99.99th=[ 5997] 00:26:33.386 bw ( KiB/s): min=74112, max=76360, per=99.98%, avg=75732.00, stdev=1083.21, samples=4 00:26:33.386 iops : min=18528, max=19090, avg=18933.00, stdev=270.80, samples=4 00:26:33.386 lat (msec) : 2=0.01%, 4=99.89%, 10=0.11% 00:26:33.386 cpu : usr=99.65%, sys=0.00%, ctx=15, majf=0, minf=2 00:26:33.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:33.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:33.386 issued rwts: total=37918,37929,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.386 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:33.386 00:26:33.386 Run status group 0 (all jobs): 00:26:33.386 READ: bw=73.9MiB/s (77.5MB/s), 73.9MiB/s-73.9MiB/s (77.5MB/s-77.5MB/s), io=148MiB (155MB), run=2003-2003msec 00:26:33.386 WRITE: bw=74.0MiB/s (77.6MB/s), 74.0MiB/s-74.0MiB/s (77.6MB/s-77.6MB/s), io=148MiB (155MB), run=2003-2003msec 00:26:33.386 17:35:52 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:33.386 17:35:52 -- common/autotest_common.sh@1349 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:33.386 17:35:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:33.386 17:35:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:33.386 17:35:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:33.386 17:35:52 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:33.386 17:35:52 -- common/autotest_common.sh@1330 -- # shift 00:26:33.386 17:35:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:33.386 17:35:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:33.386 17:35:52 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:33.386 17:35:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:33.386 17:35:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:33.386 17:35:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:33.386 17:35:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:33.386 17:35:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:33.386 17:35:52 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:33.386 17:35:52 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:33.386 17:35:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:33.386 17:35:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:33.386 17:35:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:33.386 17:35:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:33.386 17:35:52 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:33.653 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:33.653 fio-3.35 00:26:33.653 Starting 1 thread 00:26:33.653 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.279 00:26:36.279 test: (groupid=0, jobs=1): err= 0: pid=2821254: Sat Nov 9 17:35:55 2024 00:26:36.279 read: IOPS=14.9k, BW=233MiB/s (245MB/s)(459MiB/1967msec) 00:26:36.279 slat (nsec): min=2214, max=42486, avg=2575.45, stdev=962.44 00:26:36.279 clat (usec): min=481, max=7790, avg=1564.77, stdev=1238.57 00:26:36.279 lat (usec): min=483, max=7809, avg=1567.34, stdev=1238.94 00:26:36.279 clat percentiles (usec): 00:26:36.279 | 1.00th=[ 660], 5.00th=[ 750], 10.00th=[ 807], 20.00th=[ 889], 00:26:36.279 | 30.00th=[ 955], 40.00th=[ 1037], 50.00th=[ 1139], 60.00th=[ 1254], 00:26:36.279 | 70.00th=[ 1369], 80.00th=[ 1565], 90.00th=[ 3752], 95.00th=[ 4686], 00:26:36.279 | 99.00th=[ 6128], 99.50th=[ 6587], 99.90th=[ 7046], 99.95th=[ 7177], 00:26:36.279 | 99.99th=[ 7767] 00:26:36.279 bw ( KiB/s): min=103136, max=121408, per=48.45%, avg=115808.00, stdev=8649.52, samples=4 00:26:36.279 iops : min= 6446, max= 7588, avg=7238.00, stdev=540.59, samples=4 00:26:36.279 write: IOPS=8405, BW=131MiB/s (138MB/s)(235MiB/1787msec); 0 zone resets 00:26:36.279 slat (usec): min=26, max=122, avg=28.87, 
stdev= 5.41 00:26:36.279 clat (usec): min=3873, max=18543, avg=12042.32, stdev=1719.15 00:26:36.279 lat (usec): min=3902, max=18572, avg=12071.20, stdev=1718.72 00:26:36.279 clat percentiles (usec): 00:26:36.279 | 1.00th=[ 6456], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[10814], 00:26:36.279 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12387], 00:26:36.279 | 70.00th=[12911], 80.00th=[13304], 90.00th=[14091], 95.00th=[14615], 00:26:36.279 | 99.00th=[16450], 99.50th=[17171], 99.90th=[17957], 99.95th=[18220], 00:26:36.279 | 99.99th=[18482] 00:26:36.279 bw ( KiB/s): min=106464, max=126432, per=88.68%, avg=119272.00, stdev=9218.24, samples=4 00:26:36.279 iops : min= 6654, max= 7902, avg=7454.50, stdev=576.14, samples=4 00:26:36.279 lat (usec) : 500=0.01%, 750=3.25%, 1000=20.38% 00:26:36.279 lat (msec) : 2=33.93%, 4=2.19%, 10=9.40%, 20=30.84% 00:26:36.279 cpu : usr=95.81%, sys=2.10%, ctx=226, majf=0, minf=1 00:26:36.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:36.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:36.279 issued rwts: total=29384,15021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:36.279 00:26:36.279 Run status group 0 (all jobs): 00:26:36.279 READ: bw=233MiB/s (245MB/s), 233MiB/s-233MiB/s (245MB/s-245MB/s), io=459MiB (481MB), run=1967-1967msec 00:26:36.279 WRITE: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=235MiB (246MB), run=1787-1787msec 00:26:36.279 17:35:55 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:36.279 17:35:55 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:26:36.279 17:35:55 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:26:36.279 17:35:55 -- host/fio.sh@51 -- # get_nvme_bdfs 00:26:36.279 17:35:55 -- common/autotest_common.sh@1508 -- # bdfs=() 00:26:36.279 17:35:55 -- common/autotest_common.sh@1508 -- # local bdfs 00:26:36.279 17:35:55 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:36.279 17:35:55 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:36.279 17:35:55 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:26:36.279 17:35:55 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:26:36.279 17:35:55 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:26:36.279 17:35:55 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:26:39.568 Nvme0n1 00:26:39.568 17:35:58 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:26:44.841 17:36:04 -- host/fio.sh@53 -- # ls_guid=188653f5-2cef-4c7a-9cd3-ec71d95a9018 00:26:44.841 17:36:04 -- host/fio.sh@54 -- # get_lvs_free_mb 188653f5-2cef-4c7a-9cd3-ec71d95a9018 00:26:44.841 17:36:04 -- common/autotest_common.sh@1353 -- # local lvs_uuid=188653f5-2cef-4c7a-9cd3-ec71d95a9018 00:26:44.841 17:36:04 -- common/autotest_common.sh@1354 -- # local lvs_info 00:26:44.841 17:36:04 -- common/autotest_common.sh@1355 -- # local fc 00:26:44.841 17:36:04 -- common/autotest_common.sh@1356 -- # local cs 00:26:44.841 17:36:04 -- 
common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:45.100 17:36:04 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:26:45.100 { 00:26:45.100 "uuid": "188653f5-2cef-4c7a-9cd3-ec71d95a9018", 00:26:45.100 "name": "lvs_0", 00:26:45.100 "base_bdev": "Nvme0n1", 00:26:45.100 "total_data_clusters": 1862, 00:26:45.100 "free_clusters": 1862, 00:26:45.100 "block_size": 512, 00:26:45.100 "cluster_size": 1073741824 00:26:45.100 } 00:26:45.100 ]' 00:26:45.100 17:36:04 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="188653f5-2cef-4c7a-9cd3-ec71d95a9018") .free_clusters' 00:26:45.100 17:36:04 -- common/autotest_common.sh@1358 -- # fc=1862 00:26:45.100 17:36:04 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="188653f5-2cef-4c7a-9cd3-ec71d95a9018") .cluster_size' 00:26:45.100 17:36:04 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:26:45.100 17:36:04 -- common/autotest_common.sh@1362 -- # free_mb=1906688 00:26:45.100 17:36:04 -- common/autotest_common.sh@1363 -- # echo 1906688 00:26:45.100 1906688 00:26:45.100 17:36:04 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:26:45.668 3640e8e0-d47a-49d9-88af-a5dc7d30f6bd 00:26:45.668 17:36:05 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:26:45.668 17:36:05 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:26:45.927 17:36:05 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:26:46.185 17:36:05 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:46.185 17:36:05 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:46.185 17:36:05 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:46.185 17:36:05 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:46.185 17:36:05 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:46.185 17:36:05 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:46.185 17:36:05 -- common/autotest_common.sh@1330 -- # shift 00:26:46.185 17:36:05 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:46.185 17:36:05 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:46.186 17:36:05 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:46.186 17:36:05 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:46.186 17:36:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:46.186 17:36:05 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:46.186 17:36:05 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:46.186 17:36:05 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 
00:26:46.186 17:36:05 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:46.186 17:36:05 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:46.186 17:36:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:46.186 17:36:05 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:46.186 17:36:05 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:46.186 17:36:05 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:46.186 17:36:05 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:46.444 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:46.444 fio-3.35 00:26:46.444 Starting 1 thread 00:26:46.703 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.237 00:26:49.237 test: (groupid=0, jobs=1): err= 0: pid=2824178: Sat Nov 9 17:36:08 2024 00:26:49.237 read: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(81.4MiB/2004msec) 00:26:49.237 slat (nsec): min=1453, max=20308, avg=1492.88, stdev=188.59 00:26:49.237 clat (usec): min=183, max=332772, avg=6124.42, stdev=18238.36 00:26:49.237 lat (usec): min=185, max=332775, avg=6125.91, stdev=18238.38 00:26:49.237 clat percentiles (msec): 00:26:49.237 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:26:49.237 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:26:49.237 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:26:49.237 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:26:49.237 | 99.99th=[ 334] 00:26:49.237 bw ( KiB/s): min=15928, max=50352, per=99.85%, avg=41520.00, stdev=17065.98, samples=4 00:26:49.237 iops : min= 3982, max=12588, avg=10380.00, stdev=4266.49, samples=4 00:26:49.237 write: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(81.4MiB/2004msec); 0 zone resets 00:26:49.237 slat (nsec): min=1488, max=6152, avg=1678.65, stdev=181.23 00:26:49.237 clat (usec): min=155, max=333102, avg=6097.30, stdev=17734.38 00:26:49.237 lat (usec): min=157, max=333105, avg=6098.98, stdev=17734.43 00:26:49.237 clat percentiles (msec): 00:26:49.237 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:26:49.237 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:26:49.238 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:26:49.238 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:26:49.238 | 99.99th=[ 334] 00:26:49.238 bw ( KiB/s): min=16822, max=50104, per=99.90%, avg=41551.50, stdev=16487.82, samples=4 00:26:49.238 iops : min= 4205, max=12526, avg=10387.75, stdev=4122.20, samples=4 00:26:49.238 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:26:49.238 lat (msec) : 2=0.04%, 4=0.27%, 10=99.34%, 500=0.31% 00:26:49.238 cpu : usr=99.65%, sys=0.05%, ctx=16, majf=0, minf=2 00:26:49.238 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:49.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:49.238 issued rwts: total=20833,20838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:49.238 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:49.238 00:26:49.238 Run status group 0 (all jobs): 00:26:49.238 READ: bw=40.6MiB/s (42.6MB/s), 
40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=81.4MiB (85.3MB), run=2004-2004msec 00:26:49.238 WRITE: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=81.4MiB (85.4MB), run=2004-2004msec 00:26:49.238 17:36:08 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:49.238 17:36:08 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:26:50.619 17:36:10 -- host/fio.sh@64 -- # ls_nested_guid=1a34158b-f7b5-4ecc-a11e-f9fd71df0128 00:26:50.619 17:36:10 -- host/fio.sh@65 -- # get_lvs_free_mb 1a34158b-f7b5-4ecc-a11e-f9fd71df0128 00:26:50.619 17:36:10 -- common/autotest_common.sh@1353 -- # local lvs_uuid=1a34158b-f7b5-4ecc-a11e-f9fd71df0128 00:26:50.619 17:36:10 -- common/autotest_common.sh@1354 -- # local lvs_info 00:26:50.619 17:36:10 -- common/autotest_common.sh@1355 -- # local fc 00:26:50.619 17:36:10 -- common/autotest_common.sh@1356 -- # local cs 00:26:50.619 17:36:10 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:50.619 17:36:10 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:26:50.619 { 00:26:50.619 "uuid": "188653f5-2cef-4c7a-9cd3-ec71d95a9018", 00:26:50.619 "name": "lvs_0", 00:26:50.619 "base_bdev": "Nvme0n1", 00:26:50.619 "total_data_clusters": 1862, 00:26:50.619 "free_clusters": 0, 00:26:50.619 "block_size": 512, 00:26:50.619 "cluster_size": 1073741824 00:26:50.619 }, 00:26:50.619 { 00:26:50.619 "uuid": "1a34158b-f7b5-4ecc-a11e-f9fd71df0128", 00:26:50.619 "name": "lvs_n_0", 00:26:50.619 "base_bdev": "3640e8e0-d47a-49d9-88af-a5dc7d30f6bd", 00:26:50.619 "total_data_clusters": 476206, 00:26:50.619 "free_clusters": 476206, 00:26:50.619 "block_size": 512, 00:26:50.619 "cluster_size": 4194304 00:26:50.619 } 00:26:50.619 ]' 00:26:50.619 17:36:10 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="1a34158b-f7b5-4ecc-a11e-f9fd71df0128") .free_clusters' 00:26:50.619 17:36:10 -- common/autotest_common.sh@1358 -- # fc=476206 00:26:50.619 17:36:10 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="1a34158b-f7b5-4ecc-a11e-f9fd71df0128") .cluster_size' 00:26:50.619 17:36:10 -- common/autotest_common.sh@1359 -- # cs=4194304 00:26:50.619 17:36:10 -- common/autotest_common.sh@1362 -- # free_mb=1904824 00:26:50.619 17:36:10 -- common/autotest_common.sh@1363 -- # echo 1904824 00:26:50.619 1904824 00:26:50.619 17:36:10 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:26:51.558 aff0f049-31d8-4689-802d-b09b3145c125 00:26:51.558 17:36:11 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:26:51.818 17:36:11 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:26:51.818 17:36:11 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:26:52.078 17:36:11 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:52.079 17:36:11 -- 
common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:52.079 17:36:11 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:52.079 17:36:11 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:52.079 17:36:11 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:52.079 17:36:11 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:52.079 17:36:11 -- common/autotest_common.sh@1330 -- # shift 00:26:52.079 17:36:11 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:52.079 17:36:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:52.079 17:36:11 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:52.079 17:36:11 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:52.079 17:36:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:52.079 17:36:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:52.079 17:36:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:52.079 17:36:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:52.079 17:36:11 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:52.079 17:36:11 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:52.079 17:36:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:52.079 17:36:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:52.079 17:36:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:52.079 17:36:11 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:52.079 17:36:11 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:52.337 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:52.337 fio-3.35 00:26:52.337 Starting 1 thread 00:26:52.595 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.133 00:26:55.133 test: (groupid=0, jobs=1): err= 0: pid=2825298: Sat Nov 9 17:36:14 2024 00:26:55.133 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(83.6MiB/2005msec) 00:26:55.133 slat (nsec): min=1337, max=21585, avg=1470.07, stdev=238.76 00:26:55.133 clat (usec): min=2955, max=10191, avg=5926.75, stdev=185.69 00:26:55.133 lat (usec): min=2958, max=10192, avg=5928.22, stdev=185.65 00:26:55.133 clat percentiles (usec): 00:26:55.133 | 1.00th=[ 5800], 5.00th=[ 5866], 10.00th=[ 5866], 20.00th=[ 5932], 00:26:55.133 | 30.00th=[ 5932], 40.00th=[ 5932], 50.00th=[ 5932], 60.00th=[ 5932], 00:26:55.133 | 70.00th=[ 5932], 80.00th=[ 5932], 90.00th=[ 5932], 95.00th=[ 5997], 00:26:55.133 | 99.00th=[ 5997], 99.50th=[ 6390], 99.90th=[ 9503], 99.95th=[10159], 00:26:55.133 | 99.99th=[10159] 00:26:55.133 bw ( KiB/s): min=40830, max=43488, per=99.85%, avg=42629.50, stdev=1227.73, samples=4 00:26:55.133 iops : min=10207, max=10872, avg=10657.25, stdev=307.18, samples=4 00:26:55.133 write: IOPS=10.7k, BW=41.6MiB/s (43.6MB/s)(83.5MiB/2005msec); 0 zone 
resets 00:26:55.133 slat (nsec): min=1378, max=17439, avg=1595.13, stdev=342.52 00:26:55.133 clat (usec): min=2970, max=10166, avg=5941.77, stdev=151.06 00:26:55.133 lat (usec): min=2976, max=10168, avg=5943.36, stdev=151.01 00:26:55.133 clat percentiles (usec): 00:26:55.133 | 1.00th=[ 5866], 5.00th=[ 5866], 10.00th=[ 5932], 20.00th=[ 5932], 00:26:55.133 | 30.00th=[ 5932], 40.00th=[ 5932], 50.00th=[ 5932], 60.00th=[ 5932], 00:26:55.133 | 70.00th=[ 5932], 80.00th=[ 5997], 90.00th=[ 5997], 95.00th=[ 5997], 00:26:55.133 | 99.00th=[ 5997], 99.50th=[ 6325], 99.90th=[ 8094], 99.95th=[ 8717], 00:26:55.133 | 99.99th=[10159] 00:26:55.133 bw ( KiB/s): min=41277, max=43152, per=99.95%, avg=42603.25, stdev=888.24, samples=4 00:26:55.133 iops : min=10319, max=10788, avg=10650.75, stdev=222.19, samples=4 00:26:55.133 lat (msec) : 4=0.03%, 10=99.93%, 20=0.04% 00:26:55.133 cpu : usr=99.65%, sys=0.05%, ctx=16, majf=0, minf=2 00:26:55.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:55.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:55.133 issued rwts: total=21399,21365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.133 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.133 00:26:55.133 Run status group 0 (all jobs): 00:26:55.133 READ: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=83.6MiB (87.7MB), run=2005-2005msec 00:26:55.133 WRITE: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=83.5MiB (87.5MB), run=2005-2005msec 00:26:55.133 17:36:14 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:55.133 17:36:14 -- host/fio.sh@74 -- # sync 00:26:55.133 17:36:14 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:03.258 17:36:21 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:03.258 17:36:22 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:08.534 17:36:27 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:08.534 17:36:27 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:11.072 17:36:30 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:11.072 17:36:30 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:11.072 17:36:30 -- host/fio.sh@86 -- # nvmftestfini 00:27:11.072 17:36:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:11.072 17:36:30 -- nvmf/common.sh@116 -- # sync 00:27:11.072 17:36:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:11.072 17:36:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:11.072 17:36:30 -- nvmf/common.sh@119 -- # set +e 00:27:11.072 17:36:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:11.072 17:36:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:11.332 rmmod nvme_rdma 00:27:11.332 rmmod nvme_fabrics 00:27:11.332 17:36:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:11.332 17:36:30 -- nvmf/common.sh@123 -- # set -e 00:27:11.332 17:36:30 -- nvmf/common.sh@124 -- # return 0 00:27:11.332 17:36:30 -- nvmf/common.sh@477 -- # '[' -n 2820108 ']' 00:27:11.332 17:36:30 -- 
nvmf/common.sh@478 -- # killprocess 2820108 00:27:11.332 17:36:30 -- common/autotest_common.sh@936 -- # '[' -z 2820108 ']' 00:27:11.332 17:36:30 -- common/autotest_common.sh@940 -- # kill -0 2820108 00:27:11.332 17:36:30 -- common/autotest_common.sh@941 -- # uname 00:27:11.332 17:36:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:11.332 17:36:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2820108 00:27:11.332 17:36:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:11.332 17:36:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:11.332 17:36:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2820108' 00:27:11.332 killing process with pid 2820108 00:27:11.332 17:36:30 -- common/autotest_common.sh@955 -- # kill 2820108 00:27:11.332 17:36:30 -- common/autotest_common.sh@960 -- # wait 2820108 00:27:11.591 17:36:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:11.591 17:36:31 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:11.591 00:27:11.591 real 0m49.469s 00:27:11.591 user 3m37.666s 00:27:11.591 sys 0m7.213s 00:27:11.591 17:36:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:11.591 17:36:31 -- common/autotest_common.sh@10 -- # set +x 00:27:11.591 ************************************ 00:27:11.591 END TEST nvmf_fio_host 00:27:11.591 ************************************ 00:27:11.591 17:36:31 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:11.591 17:36:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:11.592 17:36:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:11.592 17:36:31 -- common/autotest_common.sh@10 -- # set +x 00:27:11.592 ************************************ 00:27:11.592 START TEST nvmf_failover 00:27:11.592 ************************************ 00:27:11.592 17:36:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:11.852 * Looking for test storage... 00:27:11.852 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:11.852 17:36:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:11.852 17:36:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:11.852 17:36:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:11.852 17:36:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:11.852 17:36:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:11.852 17:36:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:11.852 17:36:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:11.852 17:36:31 -- scripts/common.sh@335 -- # IFS=.-: 00:27:11.852 17:36:31 -- scripts/common.sh@335 -- # read -ra ver1 00:27:11.852 17:36:31 -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.852 17:36:31 -- scripts/common.sh@336 -- # read -ra ver2 00:27:11.852 17:36:31 -- scripts/common.sh@337 -- # local 'op=<' 00:27:11.852 17:36:31 -- scripts/common.sh@339 -- # ver1_l=2 00:27:11.852 17:36:31 -- scripts/common.sh@340 -- # ver2_l=1 00:27:11.852 17:36:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:11.852 17:36:31 -- scripts/common.sh@343 -- # case "$op" in 00:27:11.852 17:36:31 -- scripts/common.sh@344 -- # : 1 00:27:11.852 17:36:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:11.852 17:36:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:11.852 17:36:31 -- scripts/common.sh@364 -- # decimal 1 00:27:11.852 17:36:31 -- scripts/common.sh@352 -- # local d=1 00:27:11.852 17:36:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.852 17:36:31 -- scripts/common.sh@354 -- # echo 1 00:27:11.852 17:36:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:11.852 17:36:31 -- scripts/common.sh@365 -- # decimal 2 00:27:11.852 17:36:31 -- scripts/common.sh@352 -- # local d=2 00:27:11.852 17:36:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.852 17:36:31 -- scripts/common.sh@354 -- # echo 2 00:27:11.852 17:36:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:11.852 17:36:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:11.852 17:36:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:11.852 17:36:31 -- scripts/common.sh@367 -- # return 0 00:27:11.852 17:36:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.852 17:36:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:11.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.852 --rc genhtml_branch_coverage=1 00:27:11.852 --rc genhtml_function_coverage=1 00:27:11.852 --rc genhtml_legend=1 00:27:11.852 --rc geninfo_all_blocks=1 00:27:11.852 --rc geninfo_unexecuted_blocks=1 00:27:11.852 00:27:11.852 ' 00:27:11.852 17:36:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:11.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.852 --rc genhtml_branch_coverage=1 00:27:11.852 --rc genhtml_function_coverage=1 00:27:11.852 --rc genhtml_legend=1 00:27:11.852 --rc geninfo_all_blocks=1 00:27:11.852 --rc geninfo_unexecuted_blocks=1 00:27:11.852 00:27:11.852 ' 00:27:11.852 17:36:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:11.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.852 --rc genhtml_branch_coverage=1 00:27:11.852 --rc genhtml_function_coverage=1 00:27:11.852 --rc genhtml_legend=1 00:27:11.852 --rc geninfo_all_blocks=1 00:27:11.852 --rc geninfo_unexecuted_blocks=1 00:27:11.852 00:27:11.852 ' 00:27:11.852 17:36:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:11.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.852 --rc genhtml_branch_coverage=1 00:27:11.852 --rc genhtml_function_coverage=1 00:27:11.852 --rc genhtml_legend=1 00:27:11.852 --rc geninfo_all_blocks=1 00:27:11.852 --rc geninfo_unexecuted_blocks=1 00:27:11.852 00:27:11.852 ' 00:27:11.852 17:36:31 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.852 17:36:31 -- nvmf/common.sh@7 -- # uname -s 00:27:11.852 17:36:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.852 17:36:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.852 17:36:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.852 17:36:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.852 17:36:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.852 17:36:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.852 17:36:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.852 17:36:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.852 17:36:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.852 17:36:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.852 17:36:31 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:11.852 17:36:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:11.852 17:36:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.852 17:36:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.852 17:36:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.852 17:36:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:11.852 17:36:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.852 17:36:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.852 17:36:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.852 17:36:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.852 17:36:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.852 17:36:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.852 17:36:31 -- paths/export.sh@5 -- # export PATH 00:27:11.852 17:36:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.852 17:36:31 -- nvmf/common.sh@46 -- # : 0 00:27:11.852 17:36:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:11.852 17:36:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:11.852 17:36:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:11.852 17:36:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.852 17:36:31 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.852 17:36:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:11.852 17:36:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:11.852 17:36:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:11.852 17:36:31 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:11.852 17:36:31 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:11.852 17:36:31 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:11.852 17:36:31 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:11.852 17:36:31 -- host/failover.sh@18 -- # nvmftestinit 00:27:11.852 17:36:31 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:11.852 17:36:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.852 17:36:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:11.852 17:36:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:11.852 17:36:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:11.852 17:36:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.852 17:36:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:11.852 17:36:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.852 17:36:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:11.852 17:36:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:11.852 17:36:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:11.853 17:36:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.430 17:36:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:18.430 17:36:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:18.430 17:36:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:18.430 17:36:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:18.430 17:36:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:18.430 17:36:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:18.430 17:36:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:18.430 17:36:37 -- nvmf/common.sh@294 -- # net_devs=() 00:27:18.430 17:36:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:18.430 17:36:37 -- nvmf/common.sh@295 -- # e810=() 00:27:18.430 17:36:37 -- nvmf/common.sh@295 -- # local -ga e810 00:27:18.430 17:36:37 -- nvmf/common.sh@296 -- # x722=() 00:27:18.430 17:36:37 -- nvmf/common.sh@296 -- # local -ga x722 00:27:18.430 17:36:37 -- nvmf/common.sh@297 -- # mlx=() 00:27:18.430 17:36:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:18.430 17:36:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.430 17:36:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.430 17:36:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.430 17:36:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.430 17:36:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.430 17:36:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.430 17:36:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.430 17:36:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.430 17:36:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.430 17:36:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.430 17:36:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.430 17:36:37 
-- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:18.430 17:36:37 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:18.430 17:36:37 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:18.430 17:36:37 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:18.430 17:36:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:18.430 17:36:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:18.430 17:36:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:18.430 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:18.430 17:36:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:18.430 17:36:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:18.430 17:36:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:18.430 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:18.430 17:36:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:18.430 17:36:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:18.430 17:36:37 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:18.430 17:36:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.430 17:36:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:18.430 17:36:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.430 17:36:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:18.430 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:18.430 17:36:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.430 17:36:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:18.430 17:36:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.430 17:36:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:18.430 17:36:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.430 17:36:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:18.430 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:18.430 17:36:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.430 17:36:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:18.430 17:36:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:18.430 17:36:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:18.430 17:36:37 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:18.431 17:36:37 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:18.431 17:36:37 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 
00:27:18.431 17:36:37 -- nvmf/common.sh@57 -- # uname 00:27:18.431 17:36:37 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:18.431 17:36:37 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:18.431 17:36:37 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:18.431 17:36:37 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:18.431 17:36:37 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:18.431 17:36:37 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:18.431 17:36:37 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:18.431 17:36:37 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:18.431 17:36:37 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:18.431 17:36:37 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:18.431 17:36:37 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:18.431 17:36:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:18.431 17:36:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:18.431 17:36:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:18.431 17:36:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:18.431 17:36:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:18.431 17:36:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:18.431 17:36:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:18.431 17:36:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:18.431 17:36:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:18.431 17:36:37 -- nvmf/common.sh@104 -- # continue 2 00:27:18.431 17:36:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:18.431 17:36:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:18.431 17:36:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:18.431 17:36:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:18.431 17:36:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:18.431 17:36:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:18.431 17:36:37 -- nvmf/common.sh@104 -- # continue 2 00:27:18.431 17:36:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:18.431 17:36:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:18.431 17:36:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:18.431 17:36:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:18.431 17:36:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:18.431 17:36:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:18.431 17:36:38 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:18.431 17:36:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:18.431 17:36:38 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:18.431 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:18.431 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:18.431 altname enp217s0f0np0 00:27:18.431 altname ens818f0np0 00:27:18.431 inet 192.168.100.8/24 scope global mlx_0_0 00:27:18.431 valid_lft forever preferred_lft forever 00:27:18.431 17:36:38 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:18.431 17:36:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:18.431 17:36:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:18.431 17:36:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:18.431 17:36:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:18.431 17:36:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:18.431 17:36:38 -- nvmf/common.sh@73 -- # 
ip=192.168.100.9 00:27:18.431 17:36:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:18.431 17:36:38 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:18.431 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:18.431 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:18.431 altname enp217s0f1np1 00:27:18.431 altname ens818f1np1 00:27:18.431 inet 192.168.100.9/24 scope global mlx_0_1 00:27:18.431 valid_lft forever preferred_lft forever 00:27:18.431 17:36:38 -- nvmf/common.sh@410 -- # return 0 00:27:18.431 17:36:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:18.431 17:36:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:18.431 17:36:38 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:18.431 17:36:38 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:27:18.431 17:36:38 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:18.431 17:36:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:18.431 17:36:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:18.431 17:36:38 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:18.431 17:36:38 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:18.431 17:36:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:18.431 17:36:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:18.431 17:36:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:18.431 17:36:38 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:18.431 17:36:38 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:18.431 17:36:38 -- nvmf/common.sh@104 -- # continue 2 00:27:18.431 17:36:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:18.431 17:36:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:18.431 17:36:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:18.431 17:36:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:18.431 17:36:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:18.431 17:36:38 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:18.431 17:36:38 -- nvmf/common.sh@104 -- # continue 2 00:27:18.431 17:36:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:18.431 17:36:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:18.431 17:36:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:18.431 17:36:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:18.431 17:36:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:18.431 17:36:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:18.431 17:36:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:18.431 17:36:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:18.431 17:36:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:18.431 17:36:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:18.431 17:36:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:18.431 17:36:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:18.431 17:36:38 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:18.431 192.168.100.9' 00:27:18.431 17:36:38 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:18.431 192.168.100.9' 00:27:18.431 17:36:38 -- nvmf/common.sh@445 -- # head -n 1 00:27:18.431 17:36:38 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:18.431 17:36:38 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:18.431 192.168.100.9' 00:27:18.431 17:36:38 -- nvmf/common.sh@446 -- 
# head -n 1 00:27:18.431 17:36:38 -- nvmf/common.sh@446 -- # tail -n +2 00:27:18.431 17:36:38 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:18.431 17:36:38 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:27:18.431 17:36:38 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:18.431 17:36:38 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:18.431 17:36:38 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:18.431 17:36:38 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:18.431 17:36:38 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:18.431 17:36:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:18.431 17:36:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.431 17:36:38 -- common/autotest_common.sh@10 -- # set +x 00:27:18.431 17:36:38 -- nvmf/common.sh@469 -- # nvmfpid=2831712 00:27:18.431 17:36:38 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:18.431 17:36:38 -- nvmf/common.sh@470 -- # waitforlisten 2831712 00:27:18.431 17:36:38 -- common/autotest_common.sh@829 -- # '[' -z 2831712 ']' 00:27:18.431 17:36:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.431 17:36:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:18.431 17:36:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.431 17:36:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:18.431 17:36:38 -- common/autotest_common.sh@10 -- # set +x 00:27:18.691 [2024-11-09 17:36:38.202694] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:18.691 [2024-11-09 17:36:38.202753] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.691 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.691 [2024-11-09 17:36:38.273988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:18.691 [2024-11-09 17:36:38.345480] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:18.691 [2024-11-09 17:36:38.345591] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.691 [2024-11-09 17:36:38.345603] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.691 [2024-11-09 17:36:38.345612] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:18.691 [2024-11-09 17:36:38.345712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.691 [2024-11-09 17:36:38.345798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.691 [2024-11-09 17:36:38.345800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.629 17:36:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:19.629 17:36:39 -- common/autotest_common.sh@862 -- # return 0 00:27:19.629 17:36:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:19.629 17:36:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:19.629 17:36:39 -- common/autotest_common.sh@10 -- # set +x 00:27:19.629 17:36:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.629 17:36:39 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:19.629 [2024-11-09 17:36:39.251348] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfda860/0xfded50) succeed. 00:27:19.629 [2024-11-09 17:36:39.260308] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfdbdb0/0x10203f0) succeed. 00:27:19.629 17:36:39 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:19.889 Malloc0 00:27:19.889 17:36:39 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:20.147 17:36:39 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:20.406 17:36:39 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:20.406 [2024-11-09 17:36:40.115267] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:20.406 17:36:40 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:20.664 [2024-11-09 17:36:40.307613] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:20.665 17:36:40 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:20.924 [2024-11-09 17:36:40.500299] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:27:20.924 17:36:40 -- host/failover.sh@31 -- # bdevperf_pid=2832266 00:27:20.924 17:36:40 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:20.924 17:36:40 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:20.924 17:36:40 -- host/failover.sh@34 -- # waitforlisten 2832266 /var/tmp/bdevperf.sock 00:27:20.924 17:36:40 -- common/autotest_common.sh@829 -- # '[' -z 2832266 ']' 00:27:20.924 17:36:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:20.924 17:36:40 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:27:20.924 17:36:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:20.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:20.924 17:36:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:20.924 17:36:40 -- common/autotest_common.sh@10 -- # set +x 00:27:21.862 17:36:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:21.862 17:36:41 -- common/autotest_common.sh@862 -- # return 0 00:27:21.862 17:36:41 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:22.122 NVMe0n1 00:27:22.122 17:36:41 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:22.122 00:27:22.381 17:36:41 -- host/failover.sh@39 -- # run_test_pid=2832483 00:27:22.381 17:36:41 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:22.381 17:36:41 -- host/failover.sh@41 -- # sleep 1 00:27:23.320 17:36:42 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:23.580 17:36:43 -- host/failover.sh@45 -- # sleep 3 00:27:26.991 17:36:46 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:26.991 00:27:26.991 17:36:46 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:26.991 17:36:46 -- host/failover.sh@50 -- # sleep 3 00:27:30.283 17:36:49 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:30.283 [2024-11-09 17:36:49.712733] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:30.283 17:36:49 -- host/failover.sh@55 -- # sleep 1 00:27:31.220 17:36:50 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:31.220 17:36:50 -- host/failover.sh@59 -- # wait 2832483 00:27:37.800 0 00:27:37.800 17:36:57 -- host/failover.sh@61 -- # killprocess 2832266 00:27:37.800 17:36:57 -- common/autotest_common.sh@936 -- # '[' -z 2832266 ']' 00:27:37.800 17:36:57 -- common/autotest_common.sh@940 -- # kill -0 2832266 00:27:37.800 17:36:57 -- common/autotest_common.sh@941 -- # uname 00:27:37.800 17:36:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:37.800 17:36:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2832266 00:27:37.800 17:36:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:37.800 17:36:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:37.800 17:36:57 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 2832266' 00:27:37.800 killing process with pid 2832266 00:27:37.800 17:36:57 -- common/autotest_common.sh@955 -- # kill 2832266 00:27:37.800 17:36:57 -- common/autotest_common.sh@960 -- # wait 2832266 00:27:37.800 17:36:57 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:37.800 [2024-11-09 17:36:40.573330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:37.800 [2024-11-09 17:36:40.573385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832266 ] 00:27:37.800 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.800 [2024-11-09 17:36:40.644569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.800 [2024-11-09 17:36:40.713093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.800 Running I/O for 15 seconds... 00:27:37.800 [2024-11-09 17:36:44.086034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.800 [2024-11-09 17:36:44.086073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x182700 00:27:37.800 [2024-11-09 17:36:44.086104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.800 [2024-11-09 17:36:44.086125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.800 [2024-11-09 17:36:44.086145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x182700 00:27:37.800 [2024-11-09 17:36:44.086165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183b00 00:27:37.800 [2024-11-09 17:36:44.086185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ac400 len:0x1000 key:0x182700 
00:27:37.800 [2024-11-09 17:36:44.086205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183b00 00:27:37.800 [2024-11-09 17:36:44.086225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183b00 00:27:37.800 [2024-11-09 17:36:44.086246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x182700 00:27:37.800 [2024-11-09 17:36:44.086266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.800 [2024-11-09 17:36:44.086292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x182700 00:27:37.800 [2024-11-09 17:36:44.086313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183b00 00:27:37.800 [2024-11-09 17:36:44.086333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a5080 len:0x1000 key:0x182700 00:27:37.800 [2024-11-09 17:36:44.086353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.800 [2024-11-09 17:36:44.086372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.800 [2024-11-09 17:36:44.086383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.800 [2024-11-09 17:36:44.086392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 
sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:88008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183b00 00:27:37.801 [2024-11-09 17:36:44.086416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183b00 00:27:37.801 [2024-11-09 17:36:44.086437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x182700 00:27:37.801 [2024-11-09 17:36:44.086462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183b00 00:27:37.801 [2024-11-09 17:36:44.086483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183b00 00:27:37.801 [2024-11-09 17:36:44.086503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x182700 00:27:37.801 [2024-11-09 17:36:44.086525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389bc00 len:0x1000 key:0x182700 00:27:37.801 [2024-11-09 17:36:44.086547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:88712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x182700 00:27:37.801 [2024-11-09 17:36:44.086568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.801 [2024-11-09 17:36:44.086589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183b00 00:27:37.801 [2024-11-09 17:36:44.086609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:88072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183b00 00:27:37.801 [2024-11-09 17:36:44.086628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183b00 00:27:37.801 [2024-11-09 17:36:44.086650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013895900 len:0x1000 key:0x182700 00:27:37.801 [2024-11-09 17:36:44.086669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013894880 len:0x1000 key:0x182700 00:27:37.801 [2024-11-09 17:36:44.086689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:88096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183b00 00:27:37.801 [2024-11-09 17:36:44.086709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.801 [2024-11-09 17:36:44.086729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183b00 00:27:37.801 [2024-11-09 17:36:44.086749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.801 [2024-11-09 17:36:44.086770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:37.801 [2024-11-09 17:36:44.086789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.801 [2024-11-09 17:36:44.086809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:88776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388d500 len:0x1000 key:0x182700 00:27:37.801 [2024-11-09 17:36:44.086828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183b00 00:27:37.801 [2024-11-09 17:36:44.086848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183b00 00:27:37.801 [2024-11-09 17:36:44.086868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183b00 00:27:37.801 [2024-11-09 17:36:44.086887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.801 [2024-11-09 17:36:44.086906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.801 [2024-11-09 17:36:44.086925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.801 [2024-11-09 17:36:44.086945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.801 [2024-11-09 17:36:44.086964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086975] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183b00 00:27:37.801 [2024-11-09 17:36:44.086984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.086996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013884080 len:0x1000 key:0x182700 00:27:37.801 [2024-11-09 17:36:44.087005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.087015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183b00 00:27:37.801 [2024-11-09 17:36:44.087025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.801 [2024-11-09 17:36:44.087035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x182700 00:27:37.801 [2024-11-09 17:36:44.087044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387cd00 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:88872 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20001387bc80 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183b00 00:27:37.802 [2024-11-09 17:36:44.087183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183b00 00:27:37.802 [2024-11-09 17:36:44.087203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183b00 00:27:37.802 [2024-11-09 17:36:44.087244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f9980 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183b00 00:27:37.802 [2024-11-09 17:36:44.087303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f6800 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f3680 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ee400 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ed380 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:88296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 
len:0x1000 key:0x183b00 00:27:37.802 [2024-11-09 17:36:44.087547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183b00 00:27:37.802 [2024-11-09 17:36:44.087567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea200 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e9180 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e8100 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e7080 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183b00 00:27:37.802 [2024-11-09 17:36:44.087668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e3f00 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e0d80 len:0x1000 key:0x182700 00:27:37.802 [2024-11-09 17:36:44.087769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.802 [2024-11-09 17:36:44.087789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.802 [2024-11-09 17:36:44.087800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dec80 len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.087809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.087820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ddc00 len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.087829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.087840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183b00 00:27:37.803 [2024-11-09 17:36:44.087849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.087859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.803 [2024-11-09 17:36:44.087868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.087879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183b00 00:27:37.803 [2024-11-09 17:36:44.087888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.087898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183b00 00:27:37.803 [2024-11-09 17:36:44.087908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.087918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.803 [2024-11-09 17:36:44.087928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.087940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d7900 len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.087949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.087960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183b00 00:27:37.803 [2024-11-09 17:36:44.087969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.087980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183b00 00:27:37.803 [2024-11-09 17:36:44.087990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.803 [2024-11-09 17:36:44.088009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183b00 00:27:37.803 [2024-11-09 17:36:44.088031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183b00 00:27:37.803 [2024-11-09 17:36:44.088052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183b00 00:27:37.803 [2024-11-09 17:36:44.088073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183b00 00:27:37.803 [2024-11-09 17:36:44.088093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 
len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.088114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.803 [2024-11-09 17:36:44.088134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.803 [2024-11-09 17:36:44.088154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cc380 len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.088175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.088195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.803 [2024-11-09 17:36:44.088214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c9200 len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.088235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.803 [2024-11-09 17:36:44.088255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.088275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c6080 len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.088296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 
m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.803 [2024-11-09 17:36:44.088316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.088335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183b00 00:27:37.803 [2024-11-09 17:36:44.088357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.088377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c0e00 len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.088397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183b00 00:27:37.803 [2024-11-09 17:36:44.088419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:89232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bed00 len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.088439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183b00 00:27:37.803 [2024-11-09 17:36:44.088462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183b00 00:27:37.803 [2024-11-09 17:36:44.088483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088494] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.088503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183b00 00:27:37.803 [2024-11-09 17:36:44.088522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x182700 00:27:37.803 [2024-11-09 17:36:44.088541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.803 [2024-11-09 17:36:44.088552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:88568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183b00 00:27:37.804 [2024-11-09 17:36:44.088561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:44.088571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.804 [2024-11-09 17:36:44.088582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:44.088592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.804 [2024-11-09 17:36:44.088601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:44.088613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:88576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183b00 00:27:37.804 [2024-11-09 17:36:44.088622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:44.088632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b4800 len:0x1000 key:0x182700 00:27:37.804 [2024-11-09 17:36:44.088643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:44.090616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:37.804 [2024-11-09 17:36:44.090631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:37.804 [2024-11-09 17:36:44.090640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88584 len:8 PRP1 0x0 PRP2 0x0 00:27:37.804 [2024-11-09 17:36:44.090652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 
17:36:44.090695] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 00:27:37.804 [2024-11-09 17:36:44.090712] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:27:37.804 [2024-11-09 17:36:44.090723] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.804 [2024-11-09 17:36:44.092569] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.804 [2024-11-09 17:36:44.106979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:37.804 [2024-11-09 17:36:44.135673] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:37.804 [2024-11-09 17:36:47.518466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.804 [2024-11-09 17:36:47.518508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183b00 00:27:37.804 [2024-11-09 17:36:47.518537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183b00 00:27:37.804 [2024-11-09 17:36:47.518558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x184300 00:27:37.804 [2024-11-09 17:36:47.518578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183b00 00:27:37.804 [2024-11-09 17:36:47.518598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:53392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f8900 len:0x1000 key:0x184300 00:27:37.804 [2024-11-09 17:36:47.518618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:52784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183b00 00:27:37.804 [2024-11-09 17:36:47.518638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 
dnr:0 00:27:37.804 [2024-11-09 17:36:47.518654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f6800 len:0x1000 key:0x184300 00:27:37.804 [2024-11-09 17:36:47.518663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x184300 00:27:37.804 [2024-11-09 17:36:47.518684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:52792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183b00 00:27:37.804 [2024-11-09 17:36:47.518704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.804 [2024-11-09 17:36:47.518724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.804 [2024-11-09 17:36:47.518746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183b00 00:27:37.804 [2024-11-09 17:36:47.518767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bed00 len:0x1000 key:0x184300 00:27:37.804 [2024-11-09 17:36:47.518787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 key:0x184300 00:27:37.804 [2024-11-09 17:36:47.518807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.804 [2024-11-09 17:36:47.518828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53456 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x2000138e4f80 len:0x1000 key:0x184300 00:27:37.804 [2024-11-09 17:36:47.518849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183b00 00:27:37.804 [2024-11-09 17:36:47.518871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:52848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183b00 00:27:37.804 [2024-11-09 17:36:47.518892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.804 [2024-11-09 17:36:47.518915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.804 [2024-11-09 17:36:47.518936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.804 [2024-11-09 17:36:47.518957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.804 [2024-11-09 17:36:47.518978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.518988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183b00 00:27:37.804 [2024-11-09 17:36:47.518998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.519008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x184300 00:27:37.804 [2024-11-09 17:36:47.519017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.804 [2024-11-09 17:36:47.519028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183b00 00:27:37.805 [2024-11-09 17:36:47.519037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x184300 00:27:37.805 [2024-11-09 17:36:47.519057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.805 [2024-11-09 17:36:47.519077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b7980 len:0x1000 key:0x184300 00:27:37.805 [2024-11-09 17:36:47.519097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.805 [2024-11-09 17:36:47.519116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b5880 len:0x1000 key:0x184300 00:27:37.805 [2024-11-09 17:36:47.519136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.805 [2024-11-09 17:36:47.519156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:52904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183b00 00:27:37.805 [2024-11-09 17:36:47.519175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.805 [2024-11-09 17:36:47.519195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183b00 00:27:37.805 [2024-11-09 17:36:47.519214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52920 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183b00 00:27:37.805 [2024-11-09 17:36:47.519234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183b00 00:27:37.805 [2024-11-09 17:36:47.519253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x184300 00:27:37.805 [2024-11-09 17:36:47.519272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x184300 00:27:37.805 [2024-11-09 17:36:47.519292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183b00 00:27:37.805 [2024-11-09 17:36:47.519311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ab380 len:0x1000 key:0x184300 00:27:37.805 [2024-11-09 17:36:47.519332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183b00 00:27:37.805 [2024-11-09 17:36:47.519351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.805 [2024-11-09 17:36:47.519372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.805 [2024-11-09 17:36:47.519391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:52968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183b00 00:27:37.805 [2024-11-09 17:36:47.519411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x184300 00:27:37.805 [2024-11-09 17:36:47.519430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.805 [2024-11-09 17:36:47.519450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a4000 len:0x1000 key:0x184300 00:27:37.805 [2024-11-09 17:36:47.519474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.805 [2024-11-09 17:36:47.519495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.805 [2024-11-09 17:36:47.519515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.805 [2024-11-09 17:36:47.519535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.805 [2024-11-09 17:36:47.519554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x184300 00:27:37.805 [2024-11-09 17:36:47.519575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.805 [2024-11-09 17:36:47.519595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.805 [2024-11-09 17:36:47.519606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:53672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d6880 len:0x1000 key:0x184300
00:27:37.805 [2024-11-09 17:36:47.519616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* entries for the remaining queued READ and WRITE commands on qid:1, each completed as ABORTED - SQ DELETION (00/08), omitted ...]
00:27:37.807 [2024-11-09 17:36:47.522879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:37.807 [2024-11-09 17:36:47.522893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:37.807 [2024-11-09 17:36:47.522902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54096 len:8 PRP1 0x0 PRP2 0x0
00:27:37.807 [2024-11-09 17:36:47.522913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:37.807 [2024-11-09 17:36:47.522955] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
00:27:37.807 [2024-11-09 17:36:47.522967] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:27:37.807 [2024-11-09 17:36:47.522977] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.807 [2024-11-09 17:36:47.524687] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.807 [2024-11-09 17:36:47.538933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:37.807 [2024-11-09 17:36:47.570219] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:37.807 [2024-11-09 17:36:51.911419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183b00
00:27:37.807 [2024-11-09 17:36:51.911464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* entries for the remaining queued READ and WRITE commands on qid:1, each completed as ABORTED - SQ DELETION (00/08), omitted ...]
00:27:37.811 [2024-11-09 17:36:51.913707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95488
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013889300 len:0x1000 key:0x182700 00:27:37.811 [2024-11-09 17:36:51.913716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.913727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0x182700 00:27:37.811 [2024-11-09 17:36:51.913736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.913748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.811 [2024-11-09 17:36:51.913757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.913767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013886180 len:0x1000 key:0x182700 00:27:37.811 [2024-11-09 17:36:51.913776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.913786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.811 [2024-11-09 17:36:51.913796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.913806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183b00 00:27:37.811 [2024-11-09 17:36:51.913815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.913826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.811 [2024-11-09 17:36:51.913835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.913845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.811 [2024-11-09 17:36:51.913854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.913865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e3f00 len:0x1000 key:0x182700 00:27:37.811 [2024-11-09 17:36:51.913874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.913885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.811 [2024-11-09 17:36:51.913893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.913904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183b00 00:27:37.811 [2024-11-09 17:36:51.913913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.913925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e0d80 len:0x1000 key:0x182700 00:27:37.811 [2024-11-09 17:36:51.913935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.913946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.811 [2024-11-09 17:36:51.913955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.913965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.811 [2024-11-09 17:36:51.913974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.913985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e6000 len:0x1000 key:0x182700 00:27:37.811 [2024-11-09 17:36:51.913994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.914005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.811 [2024-11-09 17:36:51.914013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2f4a7000 sqhd:5310 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.915890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:37.811 [2024-11-09 17:36:51.915904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:37.811 [2024-11-09 17:36:51.915913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95600 len:8 PRP1 0x0 PRP2 0x0 00:27:37.811 [2024-11-09 17:36:51.915922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.811 [2024-11-09 17:36:51.915966] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:27:37.811 [2024-11-09 17:36:51.915978] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:27:37.811 [2024-11-09 17:36:51.915989] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
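The wall of ABORTED - SQ DELETION notices above is the expected teardown path: once bdev_nvme gives up on the 192.168.100.8:4422 path it deletes the submission queue, prints and manually completes every still-queued command, frees the qpair, and only then starts the failover back to 4420. That failover can only succeed because the host registered several target ports under the same controller name earlier in the run; a minimal sketch of that registration, simply mirroring the rpc.py calls traced further down in this log (socket path, address, and subsystem NQN are copied from the trace, not prescriptive), looks like:

  # primary path; -b NVMe0 names the resulting bdev
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # re-attaching the same bdev name with a different port registers a failover path
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The test then detaches whichever path is active (bdev_nvme_detach_controller) and counts the resulting 'Resetting controller successful' messages, which is exactly what host/failover.sh does in the trace below.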
00:27:37.811 [2024-11-09 17:36:51.917838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.811 [2024-11-09 17:36:51.931662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:37.811 [2024-11-09 17:36:51.962989] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:37.811 00:27:37.811 Latency(us) 00:27:37.811 [2024-11-09T16:36:57.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.811 [2024-11-09T16:36:57.581Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:37.811 Verification LBA range: start 0x0 length 0x4000 00:27:37.811 NVMe0n1 : 15.00 20222.17 78.99 266.93 0.00 6233.93 439.09 1020054.73 00:27:37.811 [2024-11-09T16:36:57.581Z] =================================================================================================================== 00:27:37.811 [2024-11-09T16:36:57.581Z] Total : 20222.17 78.99 266.93 0.00 6233.93 439.09 1020054.73 00:27:37.811 Received shutdown signal, test time was about 15.000000 seconds 00:27:37.811 00:27:37.811 Latency(us) 00:27:37.811 [2024-11-09T16:36:57.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.811 [2024-11-09T16:36:57.581Z] =================================================================================================================== 00:27:37.811 [2024-11-09T16:36:57.581Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:37.811 17:36:57 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:37.811 17:36:57 -- host/failover.sh@65 -- # count=3 00:27:37.811 17:36:57 -- host/failover.sh@67 -- # (( count != 3 )) 00:27:37.811 17:36:57 -- host/failover.sh@73 -- # bdevperf_pid=2834995 00:27:37.811 17:36:57 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:37.811 17:36:57 -- host/failover.sh@75 -- # waitforlisten 2834995 /var/tmp/bdevperf.sock 00:27:37.811 17:36:57 -- common/autotest_common.sh@829 -- # '[' -z 2834995 ']' 00:27:37.811 17:36:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:37.811 17:36:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:37.811 17:36:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:37.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
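Two things worth calling out in the trace just above: the test asserts that exactly three 'Resetting controller successful' messages were logged, and it then starts a second bdevperf instance with -z, which makes bdevperf sit idle and wait to be driven over its UNIX-domain RPC socket rather than running immediately. A minimal sketch of that drive-over-RPC pattern, with repo-relative paths and with rpc_get_methods used only as an assumed cheap way to probe the socket (the harness relies on its own waitforlisten helper for that), is:

  # start bdevperf idle (-z) and let it listen on a private RPC socket
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # poll until the RPC socket accepts requests
  until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # ... attach controllers over the socket (see the sketch above), then run the I/O:
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  kill "$bdevperf_pid"   # the harness's killprocess wrapper does this with extra checks

This is the same sequence host/failover.sh walks through in the lines that follow.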
00:27:37.811 17:36:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:37.811 17:36:57 -- common/autotest_common.sh@10 -- # set +x 00:27:38.750 17:36:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:38.750 17:36:58 -- common/autotest_common.sh@862 -- # return 0 00:27:38.750 17:36:58 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:38.751 [2024-11-09 17:36:58.370367] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:38.751 17:36:58 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:39.010 [2024-11-09 17:36:58.558992] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:27:39.010 17:36:58 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:39.269 NVMe0n1 00:27:39.269 17:36:58 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:39.529 00:27:39.529 17:36:59 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:39.788 00:27:39.788 17:36:59 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:39.788 17:36:59 -- host/failover.sh@82 -- # grep -q NVMe0 00:27:39.788 17:36:59 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:40.047 17:36:59 -- host/failover.sh@87 -- # sleep 3 00:27:43.336 17:37:02 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:43.336 17:37:02 -- host/failover.sh@88 -- # grep -q NVMe0 00:27:43.336 17:37:02 -- host/failover.sh@90 -- # run_test_pid=2836072 00:27:43.336 17:37:02 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:43.336 17:37:02 -- host/failover.sh@92 -- # wait 2836072 00:27:44.274 0 00:27:44.274 17:37:04 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:44.534 [2024-11-09 17:36:57.397233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:44.534 [2024-11-09 17:36:57.397289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834995 ] 00:27:44.534 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.534 [2024-11-09 17:36:57.465979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.534 [2024-11-09 17:36:57.528832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.534 [2024-11-09 17:36:59.699657] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:27:44.534 [2024-11-09 17:36:59.700314] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.534 [2024-11-09 17:36:59.700341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.534 [2024-11-09 17:36:59.724794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:44.534 [2024-11-09 17:36:59.740490] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:44.534 Running I/O for 1 seconds... 00:27:44.534 00:27:44.534 Latency(us) 00:27:44.534 [2024-11-09T16:37:04.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.534 [2024-11-09T16:37:04.304Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:44.534 Verification LBA range: start 0x0 length 0x4000 00:27:44.534 NVMe0n1 : 1.00 25453.85 99.43 0.00 0.00 5005.23 1264.84 11114.91 00:27:44.534 [2024-11-09T16:37:04.304Z] =================================================================================================================== 00:27:44.534 [2024-11-09T16:37:04.304Z] Total : 25453.85 99.43 0.00 0.00 5005.23 1264.84 11114.91 00:27:44.534 17:37:04 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:44.534 17:37:04 -- host/failover.sh@95 -- # grep -q NVMe0 00:27:44.534 17:37:04 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:44.794 17:37:04 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:44.794 17:37:04 -- host/failover.sh@99 -- # grep -q NVMe0 00:27:45.053 17:37:04 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:45.312 17:37:04 -- host/failover.sh@101 -- # sleep 3 00:27:48.604 17:37:07 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:48.604 17:37:07 -- host/failover.sh@103 -- # grep -q NVMe0 00:27:48.604 17:37:08 -- host/failover.sh@108 -- # killprocess 2834995 00:27:48.604 17:37:08 -- common/autotest_common.sh@936 -- # '[' -z 2834995 ']' 00:27:48.604 17:37:08 -- common/autotest_common.sh@940 -- # kill -0 2834995 00:27:48.604 17:37:08 -- common/autotest_common.sh@941 -- # uname 00:27:48.604 17:37:08 -- common/autotest_common.sh@941 
-- # '[' Linux = Linux ']' 00:27:48.604 17:37:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2834995 00:27:48.604 17:37:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:48.604 17:37:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:48.604 17:37:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2834995' 00:27:48.604 killing process with pid 2834995 00:27:48.604 17:37:08 -- common/autotest_common.sh@955 -- # kill 2834995 00:27:48.604 17:37:08 -- common/autotest_common.sh@960 -- # wait 2834995 00:27:48.604 17:37:08 -- host/failover.sh@110 -- # sync 00:27:48.604 17:37:08 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:48.863 17:37:08 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:48.863 17:37:08 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:48.863 17:37:08 -- host/failover.sh@116 -- # nvmftestfini 00:27:48.863 17:37:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:48.863 17:37:08 -- nvmf/common.sh@116 -- # sync 00:27:48.863 17:37:08 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:48.863 17:37:08 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:48.863 17:37:08 -- nvmf/common.sh@119 -- # set +e 00:27:48.863 17:37:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:48.863 17:37:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:48.863 rmmod nvme_rdma 00:27:48.863 rmmod nvme_fabrics 00:27:48.863 17:37:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:48.863 17:37:08 -- nvmf/common.sh@123 -- # set -e 00:27:48.863 17:37:08 -- nvmf/common.sh@124 -- # return 0 00:27:48.863 17:37:08 -- nvmf/common.sh@477 -- # '[' -n 2831712 ']' 00:27:48.863 17:37:08 -- nvmf/common.sh@478 -- # killprocess 2831712 00:27:48.863 17:37:08 -- common/autotest_common.sh@936 -- # '[' -z 2831712 ']' 00:27:48.863 17:37:08 -- common/autotest_common.sh@940 -- # kill -0 2831712 00:27:48.863 17:37:08 -- common/autotest_common.sh@941 -- # uname 00:27:48.863 17:37:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:48.864 17:37:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2831712 00:27:48.864 17:37:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:48.864 17:37:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:48.864 17:37:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2831712' 00:27:48.864 killing process with pid 2831712 00:27:48.864 17:37:08 -- common/autotest_common.sh@955 -- # kill 2831712 00:27:48.864 17:37:08 -- common/autotest_common.sh@960 -- # wait 2831712 00:27:49.123 17:37:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:49.123 17:37:08 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:49.123 00:27:49.123 real 0m37.569s 00:27:49.123 user 2m4.966s 00:27:49.123 sys 0m7.438s 00:27:49.123 17:37:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:49.123 17:37:08 -- common/autotest_common.sh@10 -- # set +x 00:27:49.123 ************************************ 00:27:49.123 END TEST nvmf_failover 00:27:49.123 ************************************ 00:27:49.384 17:37:08 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:27:49.384 17:37:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:49.384 17:37:08 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:27:49.384 17:37:08 -- common/autotest_common.sh@10 -- # set +x 00:27:49.384 ************************************ 00:27:49.384 START TEST nvmf_discovery 00:27:49.384 ************************************ 00:27:49.384 17:37:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:27:49.384 * Looking for test storage... 00:27:49.384 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:49.384 17:37:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:49.384 17:37:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:49.384 17:37:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:49.384 17:37:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:49.384 17:37:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:49.384 17:37:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:49.384 17:37:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:49.384 17:37:09 -- scripts/common.sh@335 -- # IFS=.-: 00:27:49.384 17:37:09 -- scripts/common.sh@335 -- # read -ra ver1 00:27:49.384 17:37:09 -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.384 17:37:09 -- scripts/common.sh@336 -- # read -ra ver2 00:27:49.384 17:37:09 -- scripts/common.sh@337 -- # local 'op=<' 00:27:49.384 17:37:09 -- scripts/common.sh@339 -- # ver1_l=2 00:27:49.384 17:37:09 -- scripts/common.sh@340 -- # ver2_l=1 00:27:49.384 17:37:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:49.384 17:37:09 -- scripts/common.sh@343 -- # case "$op" in 00:27:49.384 17:37:09 -- scripts/common.sh@344 -- # : 1 00:27:49.384 17:37:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:49.384 17:37:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:49.384 17:37:09 -- scripts/common.sh@364 -- # decimal 1 00:27:49.384 17:37:09 -- scripts/common.sh@352 -- # local d=1 00:27:49.384 17:37:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.384 17:37:09 -- scripts/common.sh@354 -- # echo 1 00:27:49.384 17:37:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:49.384 17:37:09 -- scripts/common.sh@365 -- # decimal 2 00:27:49.385 17:37:09 -- scripts/common.sh@352 -- # local d=2 00:27:49.385 17:37:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.385 17:37:09 -- scripts/common.sh@354 -- # echo 2 00:27:49.385 17:37:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:49.385 17:37:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:49.385 17:37:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:49.385 17:37:09 -- scripts/common.sh@367 -- # return 0 00:27:49.385 17:37:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.385 17:37:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:49.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.385 --rc genhtml_branch_coverage=1 00:27:49.385 --rc genhtml_function_coverage=1 00:27:49.385 --rc genhtml_legend=1 00:27:49.385 --rc geninfo_all_blocks=1 00:27:49.385 --rc geninfo_unexecuted_blocks=1 00:27:49.385 00:27:49.385 ' 00:27:49.385 17:37:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:49.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.385 --rc genhtml_branch_coverage=1 00:27:49.385 --rc genhtml_function_coverage=1 00:27:49.385 --rc genhtml_legend=1 00:27:49.385 --rc geninfo_all_blocks=1 00:27:49.385 --rc geninfo_unexecuted_blocks=1 00:27:49.385 00:27:49.385 ' 00:27:49.385 17:37:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:49.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.385 --rc genhtml_branch_coverage=1 00:27:49.385 --rc genhtml_function_coverage=1 00:27:49.385 --rc genhtml_legend=1 00:27:49.385 --rc geninfo_all_blocks=1 00:27:49.385 --rc geninfo_unexecuted_blocks=1 00:27:49.385 00:27:49.385 ' 00:27:49.385 17:37:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:49.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.385 --rc genhtml_branch_coverage=1 00:27:49.385 --rc genhtml_function_coverage=1 00:27:49.385 --rc genhtml_legend=1 00:27:49.385 --rc geninfo_all_blocks=1 00:27:49.385 --rc geninfo_unexecuted_blocks=1 00:27:49.385 00:27:49.385 ' 00:27:49.385 17:37:09 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.385 17:37:09 -- nvmf/common.sh@7 -- # uname -s 00:27:49.385 17:37:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.385 17:37:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.385 17:37:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.385 17:37:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.385 17:37:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.385 17:37:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.385 17:37:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.385 17:37:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.385 17:37:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.385 17:37:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.385 17:37:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:49.385 17:37:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:49.385 17:37:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.385 17:37:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.385 17:37:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.385 17:37:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:49.385 17:37:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.385 17:37:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.385 17:37:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.385 17:37:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.385 17:37:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.385 17:37:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.385 17:37:09 -- paths/export.sh@5 -- # export PATH 00:27:49.385 17:37:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.385 17:37:09 -- nvmf/common.sh@46 -- # : 0 00:27:49.385 17:37:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:49.385 17:37:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:49.385 17:37:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:49.385 17:37:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.385 17:37:09 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.385 17:37:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:49.385 17:37:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:49.385 17:37:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:49.385 17:37:09 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:27:49.385 17:37:09 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:27:49.385 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:27:49.385 17:37:09 -- host/discovery.sh@13 -- # exit 0 00:27:49.385 00:27:49.385 real 0m0.169s 00:27:49.386 user 0m0.092s 00:27:49.386 sys 0m0.086s 00:27:49.386 17:37:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:49.386 17:37:09 -- common/autotest_common.sh@10 -- # set +x 00:27:49.386 ************************************ 00:27:49.386 END TEST nvmf_discovery 00:27:49.386 ************************************ 00:27:49.386 17:37:09 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:27:49.386 17:37:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:49.386 17:37:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:49.386 17:37:09 -- common/autotest_common.sh@10 -- # set +x 00:27:49.386 ************************************ 00:27:49.386 START TEST nvmf_discovery_remove_ifc 00:27:49.386 ************************************ 00:27:49.386 17:37:09 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:27:49.647 * Looking for test storage... 00:27:49.647 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:49.647 17:37:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:49.647 17:37:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:49.647 17:37:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:49.647 17:37:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:49.647 17:37:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:49.647 17:37:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:49.647 17:37:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:49.647 17:37:09 -- scripts/common.sh@335 -- # IFS=.-: 00:27:49.647 17:37:09 -- scripts/common.sh@335 -- # read -ra ver1 00:27:49.647 17:37:09 -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.647 17:37:09 -- scripts/common.sh@336 -- # read -ra ver2 00:27:49.647 17:37:09 -- scripts/common.sh@337 -- # local 'op=<' 00:27:49.647 17:37:09 -- scripts/common.sh@339 -- # ver1_l=2 00:27:49.647 17:37:09 -- scripts/common.sh@340 -- # ver2_l=1 00:27:49.647 17:37:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:49.647 17:37:09 -- scripts/common.sh@343 -- # case "$op" in 00:27:49.647 17:37:09 -- scripts/common.sh@344 -- # : 1 00:27:49.647 17:37:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:49.647 17:37:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:49.647 17:37:09 -- scripts/common.sh@364 -- # decimal 1 00:27:49.647 17:37:09 -- scripts/common.sh@352 -- # local d=1 00:27:49.647 17:37:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.647 17:37:09 -- scripts/common.sh@354 -- # echo 1 00:27:49.647 17:37:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:49.647 17:37:09 -- scripts/common.sh@365 -- # decimal 2 00:27:49.647 17:37:09 -- scripts/common.sh@352 -- # local d=2 00:27:49.647 17:37:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.647 17:37:09 -- scripts/common.sh@354 -- # echo 2 00:27:49.647 17:37:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:49.647 17:37:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:49.647 17:37:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:49.647 17:37:09 -- scripts/common.sh@367 -- # return 0 00:27:49.647 17:37:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.647 17:37:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:49.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.647 --rc genhtml_branch_coverage=1 00:27:49.647 --rc genhtml_function_coverage=1 00:27:49.647 --rc genhtml_legend=1 00:27:49.647 --rc geninfo_all_blocks=1 00:27:49.647 --rc geninfo_unexecuted_blocks=1 00:27:49.647 00:27:49.647 ' 00:27:49.647 17:37:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:49.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.647 --rc genhtml_branch_coverage=1 00:27:49.647 --rc genhtml_function_coverage=1 00:27:49.647 --rc genhtml_legend=1 00:27:49.647 --rc geninfo_all_blocks=1 00:27:49.647 --rc geninfo_unexecuted_blocks=1 00:27:49.647 00:27:49.647 ' 00:27:49.647 17:37:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:49.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.647 --rc genhtml_branch_coverage=1 00:27:49.647 --rc genhtml_function_coverage=1 00:27:49.647 --rc genhtml_legend=1 00:27:49.647 --rc geninfo_all_blocks=1 00:27:49.647 --rc geninfo_unexecuted_blocks=1 00:27:49.647 00:27:49.647 ' 00:27:49.647 17:37:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:49.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.647 --rc genhtml_branch_coverage=1 00:27:49.647 --rc genhtml_function_coverage=1 00:27:49.647 --rc genhtml_legend=1 00:27:49.647 --rc geninfo_all_blocks=1 00:27:49.647 --rc geninfo_unexecuted_blocks=1 00:27:49.647 00:27:49.647 ' 00:27:49.647 17:37:09 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.647 17:37:09 -- nvmf/common.sh@7 -- # uname -s 00:27:49.647 17:37:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.647 17:37:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.647 17:37:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.647 17:37:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.647 17:37:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.647 17:37:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.648 17:37:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.648 17:37:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.648 17:37:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.648 17:37:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.648 17:37:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:49.648 17:37:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:49.648 17:37:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.648 17:37:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.648 17:37:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.648 17:37:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:49.648 17:37:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.648 17:37:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.648 17:37:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.648 17:37:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.648 17:37:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.648 17:37:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.648 17:37:09 -- paths/export.sh@5 -- # export PATH 00:27:49.648 17:37:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.648 17:37:09 -- nvmf/common.sh@46 -- # : 0 00:27:49.648 17:37:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:49.648 17:37:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:49.648 17:37:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:49.648 17:37:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.648 17:37:09 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.648 17:37:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:49.648 17:37:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:49.648 17:37:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:49.648 17:37:09 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:27:49.648 17:37:09 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:27:49.648 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:27:49.648 17:37:09 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:27:49.648 00:27:49.648 real 0m0.202s 00:27:49.648 user 0m0.130s 00:27:49.648 sys 0m0.089s 00:27:49.648 17:37:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:49.648 17:37:09 -- common/autotest_common.sh@10 -- # set +x 00:27:49.648 ************************************ 00:27:49.648 END TEST nvmf_discovery_remove_ifc 00:27:49.648 ************************************ 00:27:49.648 17:37:09 -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:27:49.648 17:37:09 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:27:49.648 17:37:09 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:27:49.648 17:37:09 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:27:49.648 17:37:09 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:27:49.648 17:37:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:49.648 17:37:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:49.648 17:37:09 -- common/autotest_common.sh@10 -- # set +x 00:27:49.648 ************************************ 00:27:49.648 START TEST nvmf_bdevperf 00:27:49.648 ************************************ 00:27:49.648 17:37:09 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:27:49.908 * Looking for test storage... 00:27:49.908 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:49.908 17:37:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:49.908 17:37:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:49.909 17:37:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:49.909 17:37:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:49.909 17:37:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:49.909 17:37:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:49.909 17:37:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:49.909 17:37:09 -- scripts/common.sh@335 -- # IFS=.-: 00:27:49.909 17:37:09 -- scripts/common.sh@335 -- # read -ra ver1 00:27:49.909 17:37:09 -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.909 17:37:09 -- scripts/common.sh@336 -- # read -ra ver2 00:27:49.909 17:37:09 -- scripts/common.sh@337 -- # local 'op=<' 00:27:49.909 17:37:09 -- scripts/common.sh@339 -- # ver1_l=2 00:27:49.909 17:37:09 -- scripts/common.sh@340 -- # ver2_l=1 00:27:49.909 17:37:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:49.909 17:37:09 -- scripts/common.sh@343 -- # case "$op" in 00:27:49.909 17:37:09 -- scripts/common.sh@344 -- # : 1 00:27:49.909 17:37:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:49.909 17:37:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:49.909 17:37:09 -- scripts/common.sh@364 -- # decimal 1 00:27:49.909 17:37:09 -- scripts/common.sh@352 -- # local d=1 00:27:49.909 17:37:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.909 17:37:09 -- scripts/common.sh@354 -- # echo 1 00:27:49.909 17:37:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:49.909 17:37:09 -- scripts/common.sh@365 -- # decimal 2 00:27:49.909 17:37:09 -- scripts/common.sh@352 -- # local d=2 00:27:49.909 17:37:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.909 17:37:09 -- scripts/common.sh@354 -- # echo 2 00:27:49.909 17:37:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:49.909 17:37:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:49.909 17:37:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:49.909 17:37:09 -- scripts/common.sh@367 -- # return 0 00:27:49.909 17:37:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.909 17:37:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:49.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.909 --rc genhtml_branch_coverage=1 00:27:49.909 --rc genhtml_function_coverage=1 00:27:49.909 --rc genhtml_legend=1 00:27:49.909 --rc geninfo_all_blocks=1 00:27:49.909 --rc geninfo_unexecuted_blocks=1 00:27:49.909 00:27:49.909 ' 00:27:49.909 17:37:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:49.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.909 --rc genhtml_branch_coverage=1 00:27:49.909 --rc genhtml_function_coverage=1 00:27:49.909 --rc genhtml_legend=1 00:27:49.909 --rc geninfo_all_blocks=1 00:27:49.909 --rc geninfo_unexecuted_blocks=1 00:27:49.909 00:27:49.909 ' 00:27:49.909 17:37:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:49.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.909 --rc genhtml_branch_coverage=1 00:27:49.909 --rc genhtml_function_coverage=1 00:27:49.909 --rc genhtml_legend=1 00:27:49.909 --rc geninfo_all_blocks=1 00:27:49.909 --rc geninfo_unexecuted_blocks=1 00:27:49.909 00:27:49.909 ' 00:27:49.909 17:37:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:49.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.909 --rc genhtml_branch_coverage=1 00:27:49.909 --rc genhtml_function_coverage=1 00:27:49.909 --rc genhtml_legend=1 00:27:49.909 --rc geninfo_all_blocks=1 00:27:49.909 --rc geninfo_unexecuted_blocks=1 00:27:49.909 00:27:49.909 ' 00:27:49.909 17:37:09 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.909 17:37:09 -- nvmf/common.sh@7 -- # uname -s 00:27:49.909 17:37:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.909 17:37:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.909 17:37:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.909 17:37:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.909 17:37:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.909 17:37:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.909 17:37:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.909 17:37:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.909 17:37:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.909 17:37:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.909 17:37:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:49.909 17:37:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:49.909 17:37:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.909 17:37:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.909 17:37:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.909 17:37:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:49.909 17:37:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.909 17:37:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.909 17:37:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.909 17:37:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.909 17:37:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.909 17:37:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.909 17:37:09 -- paths/export.sh@5 -- # export PATH 00:27:49.909 17:37:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.909 17:37:09 -- nvmf/common.sh@46 -- # : 0 00:27:49.909 17:37:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:49.909 17:37:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:49.909 17:37:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:49.909 17:37:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.909 17:37:09 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.909 17:37:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:49.909 17:37:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:49.909 17:37:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:49.909 17:37:09 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:49.909 17:37:09 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:49.909 17:37:09 -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:49.909 17:37:09 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:49.909 17:37:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.909 17:37:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:49.909 17:37:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:49.910 17:37:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:49.910 17:37:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.910 17:37:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.910 17:37:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.910 17:37:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:49.910 17:37:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:49.910 17:37:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:49.910 17:37:09 -- common/autotest_common.sh@10 -- # set +x 00:27:56.485 17:37:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:56.485 17:37:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:56.485 17:37:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:56.485 17:37:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:56.485 17:37:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:56.485 17:37:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:56.485 17:37:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:56.485 17:37:15 -- nvmf/common.sh@294 -- # net_devs=() 00:27:56.485 17:37:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:56.485 17:37:15 -- nvmf/common.sh@295 -- # e810=() 00:27:56.485 17:37:15 -- nvmf/common.sh@295 -- # local -ga e810 00:27:56.485 17:37:15 -- nvmf/common.sh@296 -- # x722=() 00:27:56.485 17:37:15 -- nvmf/common.sh@296 -- # local -ga x722 00:27:56.485 17:37:15 -- nvmf/common.sh@297 -- # mlx=() 00:27:56.485 17:37:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:56.485 17:37:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.485 17:37:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.485 17:37:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.485 17:37:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.485 17:37:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.485 17:37:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.485 17:37:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.485 17:37:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.485 17:37:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.485 17:37:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.485 17:37:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.485 17:37:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:56.485 17:37:15 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:56.485 17:37:15 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:56.485 
17:37:15 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:56.485 17:37:15 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:56.485 17:37:15 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:56.485 17:37:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:56.485 17:37:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:56.485 17:37:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:56.485 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:56.485 17:37:15 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:56.485 17:37:15 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:56.485 17:37:15 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:56.485 17:37:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:56.485 17:37:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:56.485 17:37:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:56.485 17:37:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:56.485 17:37:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:56.485 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:56.485 17:37:15 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:56.485 17:37:15 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:56.485 17:37:15 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:56.485 17:37:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:56.485 17:37:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:56.485 17:37:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:56.485 17:37:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:56.485 17:37:15 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:56.485 17:37:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:56.485 17:37:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.485 17:37:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:56.485 17:37:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.485 17:37:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:56.485 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:56.485 17:37:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.485 17:37:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:56.485 17:37:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.485 17:37:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:56.485 17:37:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.485 17:37:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:56.485 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:56.485 17:37:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.485 17:37:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:56.485 17:37:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:56.486 17:37:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:56.486 17:37:15 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:56.486 17:37:15 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:56.486 17:37:15 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:56.486 17:37:15 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:27:56.486 17:37:15 -- nvmf/common.sh@57 -- # uname 00:27:56.486 17:37:15 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:56.486 17:37:15 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:56.486 
17:37:15 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:56.486 17:37:15 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:56.486 17:37:15 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:56.486 17:37:15 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:56.486 17:37:15 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:56.486 17:37:15 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:56.486 17:37:15 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:56.486 17:37:15 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:56.486 17:37:15 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:56.486 17:37:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:56.486 17:37:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:56.486 17:37:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:56.486 17:37:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:56.486 17:37:15 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:56.486 17:37:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:56.486 17:37:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:56.486 17:37:15 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:56.486 17:37:15 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:56.486 17:37:15 -- nvmf/common.sh@104 -- # continue 2 00:27:56.486 17:37:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:56.486 17:37:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:56.486 17:37:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:56.486 17:37:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:56.486 17:37:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:56.486 17:37:15 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:56.486 17:37:15 -- nvmf/common.sh@104 -- # continue 2 00:27:56.486 17:37:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:56.486 17:37:15 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:56.486 17:37:15 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:56.486 17:37:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:56.486 17:37:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:56.486 17:37:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:56.486 17:37:15 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:56.486 17:37:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:56.486 17:37:15 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:56.486 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:56.486 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:56.486 altname enp217s0f0np0 00:27:56.486 altname ens818f0np0 00:27:56.486 inet 192.168.100.8/24 scope global mlx_0_0 00:27:56.486 valid_lft forever preferred_lft forever 00:27:56.486 17:37:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:56.486 17:37:15 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:56.486 17:37:15 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:56.486 17:37:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:56.486 17:37:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:56.486 17:37:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:56.486 17:37:15 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:27:56.486 17:37:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:56.486 17:37:15 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:56.486 7: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:27:56.486 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:56.486 altname enp217s0f1np1 00:27:56.486 altname ens818f1np1 00:27:56.486 inet 192.168.100.9/24 scope global mlx_0_1 00:27:56.486 valid_lft forever preferred_lft forever 00:27:56.486 17:37:15 -- nvmf/common.sh@410 -- # return 0 00:27:56.486 17:37:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:56.486 17:37:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:56.486 17:37:15 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:56.486 17:37:15 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:27:56.486 17:37:15 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:56.486 17:37:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:56.486 17:37:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:56.486 17:37:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:56.486 17:37:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:56.486 17:37:15 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:56.486 17:37:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:56.486 17:37:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:56.486 17:37:15 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:56.486 17:37:15 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:56.486 17:37:15 -- nvmf/common.sh@104 -- # continue 2 00:27:56.486 17:37:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:56.486 17:37:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:56.486 17:37:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:56.486 17:37:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:56.486 17:37:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:56.486 17:37:15 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:56.486 17:37:15 -- nvmf/common.sh@104 -- # continue 2 00:27:56.486 17:37:15 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:56.486 17:37:15 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:56.486 17:37:15 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:56.486 17:37:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:56.486 17:37:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:56.486 17:37:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:56.486 17:37:15 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:56.486 17:37:15 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:56.486 17:37:15 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:56.486 17:37:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:56.486 17:37:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:56.486 17:37:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:56.486 17:37:15 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:56.486 192.168.100.9' 00:27:56.486 17:37:15 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:56.486 192.168.100.9' 00:27:56.486 17:37:15 -- nvmf/common.sh@445 -- # head -n 1 00:27:56.486 17:37:15 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:56.486 17:37:15 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:56.486 192.168.100.9' 00:27:56.486 17:37:15 -- nvmf/common.sh@446 -- # tail -n +2 00:27:56.486 17:37:15 -- nvmf/common.sh@446 -- # head -n 1 00:27:56.486 17:37:15 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:56.486 17:37:15 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:27:56.486 17:37:15 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:56.486 17:37:15 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:56.486 17:37:15 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:56.486 17:37:15 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:56.486 17:37:16 -- host/bdevperf.sh@25 -- # tgt_init 00:27:56.486 17:37:16 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:56.486 17:37:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:56.486 17:37:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:56.486 17:37:16 -- common/autotest_common.sh@10 -- # set +x 00:27:56.486 17:37:16 -- nvmf/common.sh@469 -- # nvmfpid=2840444 00:27:56.486 17:37:16 -- nvmf/common.sh@470 -- # waitforlisten 2840444 00:27:56.486 17:37:16 -- common/autotest_common.sh@829 -- # '[' -z 2840444 ']' 00:27:56.486 17:37:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.486 17:37:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:56.486 17:37:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.486 17:37:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:56.486 17:37:16 -- common/autotest_common.sh@10 -- # set +x 00:27:56.486 17:37:16 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:56.486 [2024-11-09 17:37:16.072975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:56.486 [2024-11-09 17:37:16.073020] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.486 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.486 [2024-11-09 17:37:16.141260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:56.486 [2024-11-09 17:37:16.213933] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:56.486 [2024-11-09 17:37:16.214037] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.486 [2024-11-09 17:37:16.214048] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.486 [2024-11-09 17:37:16.214057] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
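The tgt_init/nvmfappstart step above amounts to launching nvmf_tgt on cores 1-3 (-m 0xE) and polling its RPC socket before any configuration is sent. A minimal stand-alone sketch of that pattern, assuming the workspace layout shown in this log and SPDK's stock scripts/rpc.py helper (the real waitforlisten in common/autotest_common.sh does the same thing with the retry budget of 100 seen above):

# launch the NVMe-oF target with all tracepoint groups enabled, as in the run above
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
tgt_pid=$!

# wait until the target answers on its default RPC socket, /var/tmp/spdk.sock
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"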
00:27:56.486 [2024-11-09 17:37:16.214098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.486 [2024-11-09 17:37:16.214179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:56.486 [2024-11-09 17:37:16.214181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.425 17:37:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:57.425 17:37:16 -- common/autotest_common.sh@862 -- # return 0 00:27:57.425 17:37:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:57.425 17:37:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.425 17:37:16 -- common/autotest_common.sh@10 -- # set +x 00:27:57.425 17:37:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.425 17:37:16 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:57.425 17:37:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.425 17:37:16 -- common/autotest_common.sh@10 -- # set +x 00:27:57.425 [2024-11-09 17:37:16.966800] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21da860/0x21ded50) succeed. 00:27:57.425 [2024-11-09 17:37:16.975883] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21dbdb0/0x22203f0) succeed. 00:27:57.425 17:37:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.425 17:37:17 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:57.425 17:37:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.425 17:37:17 -- common/autotest_common.sh@10 -- # set +x 00:27:57.425 Malloc0 00:27:57.425 17:37:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.426 17:37:17 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:57.426 17:37:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.426 17:37:17 -- common/autotest_common.sh@10 -- # set +x 00:27:57.426 17:37:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.426 17:37:17 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:57.426 17:37:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.426 17:37:17 -- common/autotest_common.sh@10 -- # set +x 00:27:57.426 17:37:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.426 17:37:17 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:57.426 17:37:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.426 17:37:17 -- common/autotest_common.sh@10 -- # set +x 00:27:57.426 [2024-11-09 17:37:17.121184] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:57.426 17:37:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.426 17:37:17 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:57.426 17:37:17 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:57.426 17:37:17 -- nvmf/common.sh@520 -- # config=() 00:27:57.426 17:37:17 -- nvmf/common.sh@520 -- # local subsystem config 00:27:57.426 17:37:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:57.426 17:37:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:57.426 { 00:27:57.426 "params": { 00:27:57.426 "name": "Nvme$subsystem", 00:27:57.426 "trtype": 
"$TEST_TRANSPORT", 00:27:57.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.426 "adrfam": "ipv4", 00:27:57.426 "trsvcid": "$NVMF_PORT", 00:27:57.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.426 "hdgst": ${hdgst:-false}, 00:27:57.426 "ddgst": ${ddgst:-false} 00:27:57.426 }, 00:27:57.426 "method": "bdev_nvme_attach_controller" 00:27:57.426 } 00:27:57.426 EOF 00:27:57.426 )") 00:27:57.426 17:37:17 -- nvmf/common.sh@542 -- # cat 00:27:57.426 17:37:17 -- nvmf/common.sh@544 -- # jq . 00:27:57.426 17:37:17 -- nvmf/common.sh@545 -- # IFS=, 00:27:57.426 17:37:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:57.426 "params": { 00:27:57.426 "name": "Nvme1", 00:27:57.426 "trtype": "rdma", 00:27:57.426 "traddr": "192.168.100.8", 00:27:57.426 "adrfam": "ipv4", 00:27:57.426 "trsvcid": "4420", 00:27:57.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:57.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:57.426 "hdgst": false, 00:27:57.426 "ddgst": false 00:27:57.426 }, 00:27:57.426 "method": "bdev_nvme_attach_controller" 00:27:57.426 }' 00:27:57.426 [2024-11-09 17:37:17.168648] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:57.426 [2024-11-09 17:37:17.168698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840650 ] 00:27:57.684 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.684 [2024-11-09 17:37:17.239061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.684 [2024-11-09 17:37:17.307983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.943 Running I/O for 1 seconds... 
00:27:58.881 00:27:58.881 Latency(us) 00:27:58.881 [2024-11-09T16:37:18.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.881 [2024-11-09T16:37:18.651Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:58.881 Verification LBA range: start 0x0 length 0x4000 00:27:58.881 Nvme1n1 : 1.00 25546.85 99.79 0.00 0.00 4986.66 1166.54 12006.20 00:27:58.881 [2024-11-09T16:37:18.651Z] =================================================================================================================== 00:27:58.881 [2024-11-09T16:37:18.651Z] Total : 25546.85 99.79 0.00 0.00 4986.66 1166.54 12006.20 00:27:59.140 17:37:18 -- host/bdevperf.sh@30 -- # bdevperfpid=2840917 00:27:59.140 17:37:18 -- host/bdevperf.sh@32 -- # sleep 3 00:27:59.141 17:37:18 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:59.141 17:37:18 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:59.141 17:37:18 -- nvmf/common.sh@520 -- # config=() 00:27:59.141 17:37:18 -- nvmf/common.sh@520 -- # local subsystem config 00:27:59.141 17:37:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:59.141 17:37:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:59.141 { 00:27:59.141 "params": { 00:27:59.141 "name": "Nvme$subsystem", 00:27:59.141 "trtype": "$TEST_TRANSPORT", 00:27:59.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.141 "adrfam": "ipv4", 00:27:59.141 "trsvcid": "$NVMF_PORT", 00:27:59.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.141 "hdgst": ${hdgst:-false}, 00:27:59.141 "ddgst": ${ddgst:-false} 00:27:59.141 }, 00:27:59.141 "method": "bdev_nvme_attach_controller" 00:27:59.141 } 00:27:59.141 EOF 00:27:59.141 )") 00:27:59.141 17:37:18 -- nvmf/common.sh@542 -- # cat 00:27:59.141 17:37:18 -- nvmf/common.sh@544 -- # jq . 00:27:59.141 17:37:18 -- nvmf/common.sh@545 -- # IFS=, 00:27:59.141 17:37:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:59.141 "params": { 00:27:59.141 "name": "Nvme1", 00:27:59.141 "trtype": "rdma", 00:27:59.141 "traddr": "192.168.100.8", 00:27:59.141 "adrfam": "ipv4", 00:27:59.141 "trsvcid": "4420", 00:27:59.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:59.141 "hdgst": false, 00:27:59.141 "ddgst": false 00:27:59.141 }, 00:27:59.141 "method": "bdev_nvme_attach_controller" 00:27:59.141 }' 00:27:59.141 [2024-11-09 17:37:18.759070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:59.141 [2024-11-09 17:37:18.759121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840917 ] 00:27:59.141 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.141 [2024-11-09 17:37:18.827890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.141 [2024-11-09 17:37:18.895184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.401 Running I/O for 15 seconds... 
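The second bdevperf instance above (pid 2840917) is started for 15 seconds with -f, which from context keeps the job running when I/O begins to fail, because the test is about to pull the target away: bdevperf.sh sleeps briefly, hard-kills the first nvmf_tgt (pid 2840444 in this run), and every I/O still outstanding on the RDMA queue pair then completes with ABORTED - SQ DELETION, which is the flood of nvme_qpair notices that follows. Stripped of the surrounding retry logic, the step reduces to a sketch like this, where gen_nvmf_target_json, $rootdir and $nvmfpid stand in for what the log shows literally:

# long verify run in the background, fed its config over a pipe as in the log (/dev/fd/63)
"$rootdir"/build/examples/bdevperf --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!

sleep 3
kill -9 "$nvmfpid"   # first target goes away; queued I/O completes as ABORTED - SQ DELETION
sleep 3              # give the host side time to log the aborted completions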
00:28:02.692 17:37:21 -- host/bdevperf.sh@33 -- # kill -9 2840444 00:28:02.692 17:37:21 -- host/bdevperf.sh@35 -- # sleep 3 00:28:03.262 [2024-11-09 17:37:22.748452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.262 [2024-11-09 17:37:22.748489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.262 [2024-11-09 17:37:22.748507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183b00 00:28:03.262 [2024-11-09 17:37:22.748518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.262 [2024-11-09 17:37:22.748529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183b00 00:28:03.262 [2024-11-09 17:37:22.748538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.262 [2024-11-09 17:37:22.748548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x184300 00:28:03.262 [2024-11-09 17:37:22.748557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.262 [2024-11-09 17:37:22.748567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.262 [2024-11-09 17:37:22.748576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.262 [2024-11-09 17:37:22.748586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.262 [2024-11-09 17:37:22.748594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.262 [2024-11-09 17:37:22.748605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183b00 00:28:03.262 [2024-11-09 17:37:22.748618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.262 [2024-11-09 17:37:22.748628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa300 len:0x1000 key:0x184300 00:28:03.262 [2024-11-09 17:37:22.748637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.262 [2024-11-09 17:37:22.748647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.262 [2024-11-09 17:37:22.748656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.262 [2024-11-09 17:37:22.748666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183b00 00:28:03.262 [2024-11-09 17:37:22.748674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.262 [2024-11-09 17:37:22.748685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x184300 00:28:03.263 [2024-11-09 17:37:22.748693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.748713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.748732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183b00 00:28:03.263 [2024-11-09 17:37:22.748751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183b00 00:28:03.263 [2024-11-09 17:37:22.748769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183b00 00:28:03.263 [2024-11-09 17:37:22.748787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183b00 00:28:03.263 [2024-11-09 17:37:22.748805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.748827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 
17:37:22.748851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.748870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.748890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183b00 00:28:03.263 [2024-11-09 17:37:22.748910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183b00 00:28:03.263 [2024-11-09 17:37:22.748930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.748949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.748968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.748986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.748996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013896980 len:0x1000 key:0x184300 00:28:03.263 [2024-11-09 17:37:22.749005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183b00 00:28:03.263 [2024-11-09 17:37:22.749023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.749041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.749060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.749079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183b00 00:28:03.263 [2024-11-09 17:37:22.749098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183b00 00:28:03.263 [2024-11-09 17:37:22.749115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388f600 len:0x1000 key:0x184300 00:28:03.263 [2024-11-09 17:37:22.749134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.749152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.749171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.749188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183b00 00:28:03.263 [2024-11-09 17:37:22.749207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388a380 len:0x1000 key:0x184300 00:28:03.263 [2024-11-09 17:37:22.749225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183b00 00:28:03.263 [2024-11-09 17:37:22.749243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.749262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.749279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183b00 00:28:03.263 [2024-11-09 17:37:22.749299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013885100 len:0x1000 key:0x184300 00:28:03.263 [2024-11-09 17:37:22.749317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013884080 len:0x1000 key:0x184300 00:28:03.263 [2024-11-09 17:37:22.749336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183b00 00:28:03.263 [2024-11-09 17:37:22.749356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.263 [2024-11-09 17:37:22.749373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x184300 00:28:03.263 [2024-11-09 17:37:22.749392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.263 [2024-11-09 17:37:22.749402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183b00 00:28:03.264 [2024-11-09 17:37:22.749447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 
sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f9980 len:0x1000 key:0x184300 00:28:03.264 [2024-11-09 17:37:22.749617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183b00 00:28:03.264 [2024-11-09 17:37:22.749636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183b00 00:28:03.264 [2024-11-09 17:37:22.749656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183b00 00:28:03.264 [2024-11-09 17:37:22.749674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f3680 len:0x1000 key:0x184300 00:28:03.264 [2024-11-09 17:37:22.749731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f2600 len:0x1000 key:0x184300 00:28:03.264 [2024-11-09 17:37:22.749753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:16592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183b00 00:28:03.264 [2024-11-09 17:37:22.749773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0x184300 00:28:03.264 [2024-11-09 17:37:22.749811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ee400 len:0x1000 key:0x184300 00:28:03.264 [2024-11-09 17:37:22.749831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ed380 len:0x1000 key:0x184300 00:28:03.264 [2024-11-09 17:37:22.749850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e9180 len:0x1000 key:0x184300 00:28:03.264 [2024-11-09 17:37:22.749926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183b00 00:28:03.264 [2024-11-09 17:37:22.749945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.749964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e6000 len:0x1000 key:0x184300 00:28:03.264 [2024-11-09 17:37:22.749985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.749995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.750004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.750014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183b00 00:28:03.264 [2024-11-09 17:37:22.750023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.750033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.750042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.750054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x184300 00:28:03.264 [2024-11-09 17:37:22.750062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.750073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e0d80 len:0x1000 key:0x184300 00:28:03.264 [2024-11-09 17:37:22.750082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.750092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.264 [2024-11-09 17:37:22.750101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.750111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dec80 len:0x1000 key:0x184300 00:28:03.264 [2024-11-09 17:37:22.750120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.750130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 
nsid:1 lba:16680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183b00 00:28:03.264 [2024-11-09 17:37:22.750139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.264 [2024-11-09 17:37:22.750149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.265 [2024-11-09 17:37:22.750158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183b00 00:28:03.265 [2024-11-09 17:37:22.750177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.265 [2024-11-09 17:37:22.750197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183b00 00:28:03.265 [2024-11-09 17:37:22.750217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.265 [2024-11-09 17:37:22.750236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.265 [2024-11-09 17:37:22.750256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183b00 00:28:03.265 [2024-11-09 17:37:22.750275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183b00 00:28:03.265 [2024-11-09 17:37:22.750294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.265 [2024-11-09 17:37:22.750313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.265 [2024-11-09 17:37:22.750332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183b00 00:28:03.265 [2024-11-09 17:37:22.750351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.265 [2024-11-09 17:37:22.750371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183b00 00:28:03.265 [2024-11-09 17:37:22.750390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 len:0x1000 key:0x184300 00:28:03.265 [2024-11-09 17:37:22.750409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183b00 00:28:03.265 [2024-11-09 17:37:22.750428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.265 [2024-11-09 17:37:22.750448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cc380 len:0x1000 key:0x184300 00:28:03.265 [2024-11-09 17:37:22.750472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.265 [2024-11-09 17:37:22.750491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16768 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183b00 00:28:03.265 [2024-11-09 17:37:22.750511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.265 [2024-11-09 17:37:22.750530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183b00 00:28:03.265 [2024-11-09 17:37:22.750549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x184300 00:28:03.265 [2024-11-09 17:37:22.750568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.265 [2024-11-09 17:37:22.750588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.265 [2024-11-09 17:37:22.750607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183b00 00:28:03.265 [2024-11-09 17:37:22.750636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183b00 00:28:03.265 [2024-11-09 17:37:22.750655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x184300 00:28:03.265 [2024-11-09 17:37:22.750674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183b00 00:28:03.265 [2024-11-09 17:37:22.750694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x184300 00:28:03.265 [2024-11-09 17:37:22.750712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183b00 00:28:03.265 [2024-11-09 17:37:22.750730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.265 [2024-11-09 17:37:22.750749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183b00 00:28:03.265 [2024-11-09 17:37:22.750767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x184300 00:28:03.265 [2024-11-09 17:37:22.750785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bab00 len:0x1000 key:0x184300 00:28:03.265 [2024-11-09 17:37:22.750804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x184300 00:28:03.265 [2024-11-09 17:37:22.750822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x184300 00:28:03.265 [2024-11-09 17:37:22.750841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b7980 len:0x1000 key:0x184300 00:28:03.265 [2024-11-09 17:37:22.750860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 
sqhd:5310 p:0 m:0 dnr:0 00:28:03.265 [2024-11-09 17:37:22.750869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.265 [2024-11-09 17:37:22.750879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.266 [2024-11-09 17:37:22.750890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b5880 len:0x1000 key:0x184300 00:28:03.266 [2024-11-09 17:37:22.750898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.266 [2024-11-09 17:37:22.750908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b4800 len:0x1000 key:0x184300 00:28:03.266 [2024-11-09 17:37:22.750916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.266 [2024-11-09 17:37:22.750926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.266 [2024-11-09 17:37:22.750934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5369b000 sqhd:5310 p:0 m:0 dnr:0 00:28:03.266 [2024-11-09 17:37:22.752962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:03.266 [2024-11-09 17:37:22.753001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:03.266 [2024-11-09 17:37:22.753029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16888 len:8 PRP1 0x0 PRP2 0x0 00:28:03.266 [2024-11-09 17:37:22.753060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.266 [2024-11-09 17:37:22.753142] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 00:28:03.266 [2024-11-09 17:37:22.755113] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.266 [2024-11-09 17:37:22.768845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:03.266 [2024-11-09 17:37:22.771582] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:03.266 [2024-11-09 17:37:22.771603] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:03.266 [2024-11-09 17:37:22.771619] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:04.204 [2024-11-09 17:37:23.775664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:04.204 [2024-11-09 17:37:23.775725] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
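Every completion in the dump above carries the same status pair, ABORTED - SQ DELETION (00/08): status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion). That is what bdev_nvme reports for the queued I/O it drains when the qpair is torn down for the controller reset attempted in the entries that follow. A small shell helper (illustrative only, not part of the test suite) that names the (SCT/SC) pair printed by spdk_nvme_print_completion:
  decode_nvme_status() {
    # Maps the "(SCT/SC)" pair from the log to a name; only the generic
    # (SCT=00) codes that actually appear in this run are spelled out.
    local sct=$1 sc=$2
    case "$sct/$sc" in
      00/00) echo "SUCCESSFUL COMPLETION" ;;
      00/07) echo "COMMAND ABORT REQUESTED" ;;
      00/08) echo "ABORTED - SQ DELETION" ;;
      *)     echo "SCT=0x$sct SC=0x$sc (see the NVMe base spec status tables)" ;;
    esac
  }
  decode_nvme_status 00 08   # -> ABORTED - SQ DELETION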
00:28:04.204 [2024-11-09 17:37:23.775908] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.204 [2024-11-09 17:37:23.775920] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.204 [2024-11-09 17:37:23.775929] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:04.204 [2024-11-09 17:37:23.776868] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:04.204 [2024-11-09 17:37:23.777579] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.204 [2024-11-09 17:37:23.788631] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.204 [2024-11-09 17:37:23.791041] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:04.204 [2024-11-09 17:37:23.791061] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:04.204 [2024-11-09 17:37:23.791069] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:05.143 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2840444 Killed "${NVMF_APP[@]}" "$@" 00:28:05.143 17:37:24 -- host/bdevperf.sh@36 -- # tgt_init 00:28:05.143 17:37:24 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:05.143 17:37:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:05.143 17:37:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:05.143 17:37:24 -- common/autotest_common.sh@10 -- # set +x 00:28:05.143 17:37:24 -- nvmf/common.sh@469 -- # nvmfpid=2841856 00:28:05.143 17:37:24 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:05.143 17:37:24 -- nvmf/common.sh@470 -- # waitforlisten 2841856 00:28:05.143 17:37:24 -- common/autotest_common.sh@829 -- # '[' -z 2841856 ']' 00:28:05.143 17:37:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.143 17:37:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:05.143 17:37:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.143 17:37:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:05.143 17:37:24 -- common/autotest_common.sh@10 -- # set +x 00:28:05.143 [2024-11-09 17:37:24.780042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:05.143 [2024-11-09 17:37:24.780084] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.143 [2024-11-09 17:37:24.794996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:05.143 [2024-11-09 17:37:24.795018] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:05.143 [2024-11-09 17:37:24.795105] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.143 [2024-11-09 17:37:24.795118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.143 [2024-11-09 17:37:24.795128] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:05.143 [2024-11-09 17:37:24.795991] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:05.143 [2024-11-09 17:37:24.796854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.143 [2024-11-09 17:37:24.807826] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.143 [2024-11-09 17:37:24.809908] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:05.143 [2024-11-09 17:37:24.809928] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:05.143 [2024-11-09 17:37:24.809937] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:05.143 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.143 [2024-11-09 17:37:24.852225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:05.402 [2024-11-09 17:37:24.925540] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:05.402 [2024-11-09 17:37:24.925645] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.402 [2024-11-09 17:37:24.925656] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.402 [2024-11-09 17:37:24.925665] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:05.402 [2024-11-09 17:37:24.925711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.402 [2024-11-09 17:37:24.925797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:05.402 [2024-11-09 17:37:24.925799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.971 17:37:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:05.971 17:37:25 -- common/autotest_common.sh@862 -- # return 0 00:28:05.971 17:37:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:05.971 17:37:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:05.971 17:37:25 -- common/autotest_common.sh@10 -- # set +x 00:28:05.971 17:37:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.971 17:37:25 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:05.971 17:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.971 17:37:25 -- common/autotest_common.sh@10 -- # set +x 00:28:05.971 [2024-11-09 17:37:25.668084] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2021860/0x2025d50) succeed. 00:28:05.971 [2024-11-09 17:37:25.677463] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2022db0/0x20673f0) succeed. 
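The -m 0xE mask handed to nvmf_tgt above is why exactly three reactors come up, on cores 1, 2 and 3 (bit 0 is clear); the target_disconnect tests later in this run use -m 0xF0, i.e. cores 4 through 7. A quick, purely illustrative way to expand a core mask from the shell:
  for mask in 0xE 0xF0; do
    printf '%s -> cores:' "$mask"
    for cpu in $(seq 0 7); do
      (( (mask >> cpu) & 1 )) && printf ' %d' "$cpu"
    done
    echo
  done
  # 0xE  -> cores: 1 2 3
  # 0xF0 -> cores: 4 5 6 7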
00:28:06.231 17:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.231 17:37:25 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:06.231 17:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.231 17:37:25 -- common/autotest_common.sh@10 -- # set +x 00:28:06.231 Malloc0 00:28:06.231 17:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.231 17:37:25 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:06.231 17:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.231 17:37:25 -- common/autotest_common.sh@10 -- # set +x 00:28:06.231 17:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.231 17:37:25 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:06.231 17:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.231 17:37:25 -- common/autotest_common.sh@10 -- # set +x 00:28:06.231 [2024-11-09 17:37:25.813761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:06.231 [2024-11-09 17:37:25.813788] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.231 [2024-11-09 17:37:25.813903] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.231 [2024-11-09 17:37:25.813915] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.231 [2024-11-09 17:37:25.813927] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:06.231 17:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.231 [2024-11-09 17:37:25.815475] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:06.231 17:37:25 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:06.231 [2024-11-09 17:37:25.815575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.231 17:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.231 17:37:25 -- common/autotest_common.sh@10 -- # set +x 00:28:06.231 [2024-11-09 17:37:25.818667] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:06.231 17:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.231 17:37:25 -- host/bdevperf.sh@38 -- # wait 2840917 00:28:06.231 [2024-11-09 17:37:25.827424] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.231 [2024-11-09 17:37:25.860064] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
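The rebuilt target is configured through the same five RPCs each time: create the RDMA transport, create a Malloc bdev, create the subsystem, attach the namespace, and add the RDMA listener. Outside the harness the equivalent sequence would presumably be driven with SPDK's scripts/rpc.py (a sketch using the values from the log and assuming the default RPC socket, not the test script itself):
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420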
00:28:14.444 00:28:14.444 Latency(us) 00:28:14.444 [2024-11-09T16:37:34.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.444 [2024-11-09T16:37:34.214Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:14.444 Verification LBA range: start 0x0 length 0x4000 00:28:14.444 Nvme1n1 : 15.00 18743.62 73.22 16656.02 0.00 3604.13 373.56 1040187.39 00:28:14.444 [2024-11-09T16:37:34.214Z] =================================================================================================================== 00:28:14.444 [2024-11-09T16:37:34.214Z] Total : 18743.62 73.22 16656.02 0.00 3604.13 373.56 1040187.39 00:28:14.703 17:37:34 -- host/bdevperf.sh@39 -- # sync 00:28:14.703 17:37:34 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:14.703 17:37:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.703 17:37:34 -- common/autotest_common.sh@10 -- # set +x 00:28:14.703 17:37:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.703 17:37:34 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:14.703 17:37:34 -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:14.703 17:37:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:14.703 17:37:34 -- nvmf/common.sh@116 -- # sync 00:28:14.703 17:37:34 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:14.703 17:37:34 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:14.703 17:37:34 -- nvmf/common.sh@119 -- # set +e 00:28:14.703 17:37:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:14.703 17:37:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:14.703 rmmod nvme_rdma 00:28:14.703 rmmod nvme_fabrics 00:28:14.703 17:37:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:14.703 17:37:34 -- nvmf/common.sh@123 -- # set -e 00:28:14.703 17:37:34 -- nvmf/common.sh@124 -- # return 0 00:28:14.703 17:37:34 -- nvmf/common.sh@477 -- # '[' -n 2841856 ']' 00:28:14.703 17:37:34 -- nvmf/common.sh@478 -- # killprocess 2841856 00:28:14.703 17:37:34 -- common/autotest_common.sh@936 -- # '[' -z 2841856 ']' 00:28:14.703 17:37:34 -- common/autotest_common.sh@940 -- # kill -0 2841856 00:28:14.703 17:37:34 -- common/autotest_common.sh@941 -- # uname 00:28:14.703 17:37:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:14.703 17:37:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2841856 00:28:14.961 17:37:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:14.961 17:37:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:14.961 17:37:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2841856' 00:28:14.961 killing process with pid 2841856 00:28:14.961 17:37:34 -- common/autotest_common.sh@955 -- # kill 2841856 00:28:14.961 17:37:34 -- common/autotest_common.sh@960 -- # wait 2841856 00:28:15.221 17:37:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:15.221 17:37:34 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:28:15.221 00:28:15.221 real 0m25.398s 00:28:15.221 user 1m4.678s 00:28:15.221 sys 0m6.248s 00:28:15.221 17:37:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:15.221 17:37:34 -- common/autotest_common.sh@10 -- # set +x 00:28:15.221 ************************************ 00:28:15.221 END TEST nvmf_bdevperf 00:28:15.221 ************************************ 00:28:15.221 17:37:34 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh 
--transport=rdma 00:28:15.221 17:37:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:15.221 17:37:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:15.221 17:37:34 -- common/autotest_common.sh@10 -- # set +x 00:28:15.221 ************************************ 00:28:15.221 START TEST nvmf_target_disconnect 00:28:15.221 ************************************ 00:28:15.221 17:37:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:28:15.221 * Looking for test storage... 00:28:15.221 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:15.221 17:37:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:15.221 17:37:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:15.221 17:37:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:15.221 17:37:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:15.221 17:37:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:15.221 17:37:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:15.221 17:37:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:15.221 17:37:34 -- scripts/common.sh@335 -- # IFS=.-: 00:28:15.221 17:37:34 -- scripts/common.sh@335 -- # read -ra ver1 00:28:15.221 17:37:34 -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.221 17:37:34 -- scripts/common.sh@336 -- # read -ra ver2 00:28:15.221 17:37:34 -- scripts/common.sh@337 -- # local 'op=<' 00:28:15.221 17:37:34 -- scripts/common.sh@339 -- # ver1_l=2 00:28:15.221 17:37:34 -- scripts/common.sh@340 -- # ver2_l=1 00:28:15.221 17:37:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:15.221 17:37:34 -- scripts/common.sh@343 -- # case "$op" in 00:28:15.221 17:37:34 -- scripts/common.sh@344 -- # : 1 00:28:15.221 17:37:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:15.221 17:37:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:15.221 17:37:34 -- scripts/common.sh@364 -- # decimal 1 00:28:15.221 17:37:34 -- scripts/common.sh@352 -- # local d=1 00:28:15.221 17:37:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.221 17:37:34 -- scripts/common.sh@354 -- # echo 1 00:28:15.221 17:37:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:15.221 17:37:34 -- scripts/common.sh@365 -- # decimal 2 00:28:15.221 17:37:34 -- scripts/common.sh@352 -- # local d=2 00:28:15.221 17:37:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.221 17:37:34 -- scripts/common.sh@354 -- # echo 2 00:28:15.221 17:37:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:15.221 17:37:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:15.221 17:37:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:15.221 17:37:34 -- scripts/common.sh@367 -- # return 0 00:28:15.221 17:37:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.221 17:37:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:15.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.221 --rc genhtml_branch_coverage=1 00:28:15.221 --rc genhtml_function_coverage=1 00:28:15.221 --rc genhtml_legend=1 00:28:15.221 --rc geninfo_all_blocks=1 00:28:15.221 --rc geninfo_unexecuted_blocks=1 00:28:15.221 00:28:15.221 ' 00:28:15.221 17:37:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:15.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.221 --rc genhtml_branch_coverage=1 00:28:15.221 --rc genhtml_function_coverage=1 00:28:15.221 --rc genhtml_legend=1 00:28:15.221 --rc geninfo_all_blocks=1 00:28:15.221 --rc geninfo_unexecuted_blocks=1 00:28:15.221 00:28:15.221 ' 00:28:15.221 17:37:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:15.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.221 --rc genhtml_branch_coverage=1 00:28:15.221 --rc genhtml_function_coverage=1 00:28:15.221 --rc genhtml_legend=1 00:28:15.221 --rc geninfo_all_blocks=1 00:28:15.221 --rc geninfo_unexecuted_blocks=1 00:28:15.221 00:28:15.221 ' 00:28:15.221 17:37:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:15.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.221 --rc genhtml_branch_coverage=1 00:28:15.221 --rc genhtml_function_coverage=1 00:28:15.221 --rc genhtml_legend=1 00:28:15.221 --rc geninfo_all_blocks=1 00:28:15.221 --rc geninfo_unexecuted_blocks=1 00:28:15.221 00:28:15.221 ' 00:28:15.221 17:37:34 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.221 17:37:34 -- nvmf/common.sh@7 -- # uname -s 00:28:15.221 17:37:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.221 17:37:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.221 17:37:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.221 17:37:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.221 17:37:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.221 17:37:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.221 17:37:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.221 17:37:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.221 17:37:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.221 17:37:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.481 17:37:34 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:15.481 17:37:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:15.481 17:37:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.481 17:37:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.481 17:37:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.481 17:37:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:15.481 17:37:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.481 17:37:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.481 17:37:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.482 17:37:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.482 17:37:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.482 17:37:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.482 17:37:35 -- paths/export.sh@5 -- # export PATH 00:28:15.482 17:37:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.482 17:37:35 -- nvmf/common.sh@46 -- # : 0 00:28:15.482 17:37:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:15.482 17:37:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:15.482 17:37:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:15.482 17:37:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.482 17:37:35 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.482 17:37:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:15.482 17:37:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:15.482 17:37:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:15.482 17:37:35 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:28:15.482 17:37:35 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:15.482 17:37:35 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:15.482 17:37:35 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:28:15.482 17:37:35 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:15.482 17:37:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.482 17:37:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:15.482 17:37:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:15.482 17:37:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:15.482 17:37:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.482 17:37:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.482 17:37:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.482 17:37:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:15.482 17:37:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:15.482 17:37:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:15.482 17:37:35 -- common/autotest_common.sh@10 -- # set +x 00:28:22.056 17:37:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:22.056 17:37:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:22.056 17:37:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:22.056 17:37:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:22.056 17:37:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:22.056 17:37:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:22.056 17:37:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:22.056 17:37:41 -- nvmf/common.sh@294 -- # net_devs=() 00:28:22.056 17:37:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:22.056 17:37:41 -- nvmf/common.sh@295 -- # e810=() 00:28:22.056 17:37:41 -- nvmf/common.sh@295 -- # local -ga e810 00:28:22.056 17:37:41 -- nvmf/common.sh@296 -- # x722=() 00:28:22.056 17:37:41 -- nvmf/common.sh@296 -- # local -ga x722 00:28:22.056 17:37:41 -- nvmf/common.sh@297 -- # mlx=() 00:28:22.056 17:37:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:22.056 17:37:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.056 17:37:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.056 17:37:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.056 17:37:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.056 17:37:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.056 17:37:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.056 17:37:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.056 17:37:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.056 17:37:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.056 17:37:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.056 17:37:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.056 17:37:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 
00:28:22.056 17:37:41 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:22.056 17:37:41 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:22.056 17:37:41 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:22.056 17:37:41 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:22.056 17:37:41 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:22.056 17:37:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:22.056 17:37:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:22.056 17:37:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:22.056 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:22.057 17:37:41 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:22.057 17:37:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:22.057 17:37:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:22.057 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:22.057 17:37:41 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:22.057 17:37:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:22.057 17:37:41 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:22.057 17:37:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.057 17:37:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:22.057 17:37:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.057 17:37:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:22.057 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:22.057 17:37:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.057 17:37:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:22.057 17:37:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.057 17:37:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:22.057 17:37:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.057 17:37:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:22.057 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:22.057 17:37:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.057 17:37:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:22.057 17:37:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:22.057 17:37:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:22.057 17:37:41 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:22.057 17:37:41 -- nvmf/common.sh@57 -- # uname 
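The scan above walks /sys/bus/pci/devices, keeps the Mellanox (0x15b3) entries, and records the netdev found under each device's net/ directory, which is how mlx_0_0 and mlx_0_1 end up in net_devs. A stand-alone version of that lookup, handy for manual debugging (a sketch, not part of nvmf/common.sh):
  for pci in /sys/bus/pci/devices/*; do
    # 0x15b3 is the Mellanox vendor ID matched by the harness above
    [ "$(cat "$pci/vendor")" = "0x15b3" ] || continue
    echo "$(basename "$pci") ($(cat "$pci/device")): $(ls "$pci/net" 2>/dev/null)"
  done
  # e.g. 0000:d9:00.0 (0x1015): mlx_0_0
  #      0000:d9:00.1 (0x1015): mlx_0_1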
00:28:22.057 17:37:41 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:22.057 17:37:41 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:22.057 17:37:41 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:22.057 17:37:41 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:22.057 17:37:41 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:22.057 17:37:41 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:22.057 17:37:41 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:22.057 17:37:41 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:22.057 17:37:41 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:22.057 17:37:41 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:22.057 17:37:41 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:22.057 17:37:41 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:22.057 17:37:41 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:22.057 17:37:41 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:22.057 17:37:41 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:22.057 17:37:41 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:22.057 17:37:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:22.057 17:37:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:22.057 17:37:41 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:22.057 17:37:41 -- nvmf/common.sh@104 -- # continue 2 00:28:22.057 17:37:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:22.057 17:37:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:22.057 17:37:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:22.057 17:37:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:22.057 17:37:41 -- nvmf/common.sh@104 -- # continue 2 00:28:22.057 17:37:41 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:22.057 17:37:41 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:22.057 17:37:41 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:22.057 17:37:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:22.057 17:37:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:22.057 17:37:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:22.057 17:37:41 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:22.057 17:37:41 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:22.057 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:22.057 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:22.057 altname enp217s0f0np0 00:28:22.057 altname ens818f0np0 00:28:22.057 inet 192.168.100.8/24 scope global mlx_0_0 00:28:22.057 valid_lft forever preferred_lft forever 00:28:22.057 17:37:41 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:22.057 17:37:41 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:22.057 17:37:41 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:22.057 17:37:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:22.057 17:37:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:22.057 17:37:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:22.057 17:37:41 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:22.057 17:37:41 -- nvmf/common.sh@74 -- # 
[[ -z 192.168.100.9 ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:22.057 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:22.057 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:22.057 altname enp217s0f1np1 00:28:22.057 altname ens818f1np1 00:28:22.057 inet 192.168.100.9/24 scope global mlx_0_1 00:28:22.057 valid_lft forever preferred_lft forever 00:28:22.057 17:37:41 -- nvmf/common.sh@410 -- # return 0 00:28:22.057 17:37:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:22.057 17:37:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:22.057 17:37:41 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:22.057 17:37:41 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:22.057 17:37:41 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:22.057 17:37:41 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:22.057 17:37:41 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:22.057 17:37:41 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:22.057 17:37:41 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:22.057 17:37:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:22.057 17:37:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:22.057 17:37:41 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:22.057 17:37:41 -- nvmf/common.sh@104 -- # continue 2 00:28:22.057 17:37:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:22.057 17:37:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:22.057 17:37:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:22.057 17:37:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:22.057 17:37:41 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:22.057 17:37:41 -- nvmf/common.sh@104 -- # continue 2 00:28:22.057 17:37:41 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:22.057 17:37:41 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:22.057 17:37:41 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:22.057 17:37:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:22.057 17:37:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:22.057 17:37:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:22.057 17:37:41 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:22.057 17:37:41 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:22.057 17:37:41 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:22.057 17:37:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:22.057 17:37:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:22.057 17:37:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:22.057 17:37:41 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:22.057 192.168.100.9' 00:28:22.057 17:37:41 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:22.057 192.168.100.9' 00:28:22.057 17:37:41 -- nvmf/common.sh@445 -- # head -n 1 00:28:22.057 17:37:41 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:22.057 17:37:41 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:22.057 192.168.100.9' 00:28:22.057 17:37:41 -- nvmf/common.sh@446 -- # tail -n +2 00:28:22.057 17:37:41 -- nvmf/common.sh@446 -- # 
head -n 1 00:28:22.057 17:37:41 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:22.057 17:37:41 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:22.057 17:37:41 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:22.057 17:37:41 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:22.057 17:37:41 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:22.057 17:37:41 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:22.057 17:37:41 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:22.057 17:37:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:22.057 17:37:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:22.057 17:37:41 -- common/autotest_common.sh@10 -- # set +x 00:28:22.057 ************************************ 00:28:22.057 START TEST nvmf_target_disconnect_tc1 00:28:22.057 ************************************ 00:28:22.057 17:37:41 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc1 00:28:22.057 17:37:41 -- host/target_disconnect.sh@32 -- # set +e 00:28:22.057 17:37:41 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:22.057 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.058 [2024-11-09 17:37:41.748656] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:22.058 [2024-11-09 17:37:41.748705] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:22.058 [2024-11-09 17:37:41.748719] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d70c0 00:28:22.996 [2024-11-09 17:37:42.752650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:22.996 [2024-11-09 17:37:42.752709] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:28:22.996 [2024-11-09 17:37:42.752743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:28:22.996 [2024-11-09 17:37:42.752801] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:22.996 [2024-11-09 17:37:42.752832] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:22.996 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:28:22.996 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:22.996 Initializing NVMe Controllers 00:28:22.996 17:37:42 -- host/target_disconnect.sh@33 -- # trap - ERR 00:28:22.996 17:37:42 -- host/target_disconnect.sh@33 -- # print_backtrace 00:28:22.996 17:37:42 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:28:22.996 17:37:42 -- common/autotest_common.sh@1142 -- # return 0 00:28:22.996 17:37:42 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:28:22.996 17:37:42 -- host/target_disconnect.sh@41 -- # set -e 00:28:22.996 00:28:22.996 real 0m1.125s 00:28:22.996 user 0m0.859s 00:28:22.996 sys 0m0.255s 00:28:22.996 17:37:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:22.996 17:37:42 -- common/autotest_common.sh@10 -- # set +x 00:28:22.996 ************************************ 00:28:22.996 END TEST nvmf_target_disconnect_tc1 00:28:22.996 ************************************ 00:28:23.256 17:37:42 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:23.256 17:37:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:23.256 17:37:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:23.256 17:37:42 -- common/autotest_common.sh@10 -- # set +x 00:28:23.256 ************************************ 00:28:23.256 START TEST nvmf_target_disconnect_tc2 00:28:23.256 ************************************ 00:28:23.256 17:37:42 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc2 00:28:23.256 17:37:42 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:28:23.256 17:37:42 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:23.256 17:37:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:23.256 17:37:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:23.256 17:37:42 -- common/autotest_common.sh@10 -- # set +x 00:28:23.256 17:37:42 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:23.256 17:37:42 -- nvmf/common.sh@469 -- # nvmfpid=2847195 00:28:23.256 17:37:42 -- nvmf/common.sh@470 -- # waitforlisten 2847195 00:28:23.256 17:37:42 -- common/autotest_common.sh@829 -- # '[' -z 2847195 ']' 00:28:23.256 17:37:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.256 17:37:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:23.256 17:37:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.256 17:37:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:23.256 17:37:42 -- common/autotest_common.sh@10 -- # set +x 00:28:23.256 [2024-11-09 17:37:42.850183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
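tc1, which finishes above, passes precisely because the probe fails: the reconnect example is pointed at 192.168.100.8:4420 before any subsystem is listening there, so the RDMA connect is rejected (RDMA_CM_EVENT_REJECTED, connect error -74) and spdk_nvme_probe() returns an error, which is the outcome the test asserts under set +e. In essence (a sketch of the assertion, not the script verbatim):
  set +e
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
  [ $? -ne 0 ] && echo "tc1 OK: probe failed as expected, nothing is listening yet"
  set -e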
00:28:23.256 [2024-11-09 17:37:42.850227] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.256 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.256 [2024-11-09 17:37:42.934044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:23.256 [2024-11-09 17:37:43.006841] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:23.256 [2024-11-09 17:37:43.006944] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.256 [2024-11-09 17:37:43.006954] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.256 [2024-11-09 17:37:43.006963] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:23.256 [2024-11-09 17:37:43.007079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:23.256 [2024-11-09 17:37:43.007189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:23.256 [2024-11-09 17:37:43.007297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:23.256 [2024-11-09 17:37:43.007298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:24.194 17:37:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:24.194 17:37:43 -- common/autotest_common.sh@862 -- # return 0 00:28:24.194 17:37:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:24.194 17:37:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:24.194 17:37:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.194 17:37:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.194 17:37:43 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:24.194 17:37:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.194 17:37:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.194 Malloc0 00:28:24.194 17:37:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.194 17:37:43 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:24.194 17:37:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.194 17:37:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.194 [2024-11-09 17:37:43.780178] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16553c0/0x1660dc0) succeed. 00:28:24.194 [2024-11-09 17:37:43.789518] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16569b0/0x16e0e00) succeed. 
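With the tc2 target's transport and Malloc bdev in place, the next step adds the subsystem and its RDMA listener at 192.168.100.8:4420, which is exactly the address the reconnect example is then pointed at via -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'. As an aside, the same listener could in principle be cross-checked by hand with the kernel initiator (an assumption, this is not part of target_disconnect.sh; nvme-rdma was already modprobed during nvmftestinit):
  nvme discover -t rdma -a 192.168.100.8 -s 4420
  nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # ...inspect /dev/nvme*...
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1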
00:28:24.194 17:37:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.194 17:37:43 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:24.194 17:37:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.195 17:37:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.195 17:37:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.195 17:37:43 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:24.195 17:37:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.195 17:37:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.195 17:37:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.195 17:37:43 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:24.195 17:37:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.195 17:37:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.195 [2024-11-09 17:37:43.934924] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:24.195 17:37:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.195 17:37:43 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:24.195 17:37:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.195 17:37:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.195 17:37:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.195 17:37:43 -- host/target_disconnect.sh@50 -- # reconnectpid=2847319 00:28:24.195 17:37:43 -- host/target_disconnect.sh@52 -- # sleep 2 00:28:24.195 17:37:43 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:24.454 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.361 17:37:45 -- host/target_disconnect.sh@53 -- # kill -9 2847195 00:28:26.361 17:37:45 -- host/target_disconnect.sh@55 -- # sleep 2 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with 
error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Read completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 Write completed with error (sct=0, sc=8) 00:28:27.741 starting I/O failed 00:28:27.741 [2024-11-09 17:37:47.118778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:28.310 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2847195 Killed "${NVMF_APP[@]}" "$@" 00:28:28.310 17:37:47 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:28:28.310 17:37:47 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:28.310 17:37:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:28.310 17:37:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:28.310 17:37:47 -- common/autotest_common.sh@10 -- # set +x 00:28:28.310 17:37:47 -- nvmf/common.sh@469 -- # nvmfpid=2848038 00:28:28.310 17:37:47 -- nvmf/common.sh@470 -- # waitforlisten 2848038 00:28:28.310 17:37:47 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:28.310 17:37:47 -- common/autotest_common.sh@829 -- # '[' -z 2848038 ']' 00:28:28.310 17:37:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.310 17:37:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:28.310 17:37:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.310 17:37:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:28.310 17:37:47 -- common/autotest_common.sh@10 -- # set +x 00:28:28.310 [2024-11-09 17:37:48.010831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:28.310 [2024-11-09 17:37:48.010876] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.310 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.570 [2024-11-09 17:37:48.096239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Write completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 Read completed with error (sct=0, sc=8) 00:28:28.570 starting I/O failed 00:28:28.570 [2024-11-09 17:37:48.123917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.570 [2024-11-09 17:37:48.167617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:28.570 [2024-11-09 17:37:48.167719] app.c: 
488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.570 [2024-11-09 17:37:48.167728] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.570 [2024-11-09 17:37:48.167737] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:28.570 [2024-11-09 17:37:48.167856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:28.570 [2024-11-09 17:37:48.167963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:28.570 [2024-11-09 17:37:48.168069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:28.570 [2024-11-09 17:37:48.168071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:29.139 17:37:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:29.139 17:37:48 -- common/autotest_common.sh@862 -- # return 0 00:28:29.139 17:37:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:29.139 17:37:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:29.139 17:37:48 -- common/autotest_common.sh@10 -- # set +x 00:28:29.139 17:37:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.139 17:37:48 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:29.139 17:37:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.139 17:37:48 -- common/autotest_common.sh@10 -- # set +x 00:28:29.139 Malloc0 00:28:29.139 17:37:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.139 17:37:48 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:29.139 17:37:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.139 17:37:48 -- common/autotest_common.sh@10 -- # set +x 00:28:29.399 [2024-11-09 17:37:48.934677] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18c13c0/0x18ccdc0) succeed. 00:28:29.399 [2024-11-09 17:37:48.944197] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18c29b0/0x194ce00) succeed. 
00:28:29.399 17:37:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.399 17:37:49 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:29.399 17:37:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.399 17:37:49 -- common/autotest_common.sh@10 -- # set +x 00:28:29.399 17:37:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.399 17:37:49 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:29.399 17:37:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.399 17:37:49 -- common/autotest_common.sh@10 -- # set +x 00:28:29.399 17:37:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.399 17:37:49 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:29.399 17:37:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.399 17:37:49 -- common/autotest_common.sh@10 -- # set +x 00:28:29.399 [2024-11-09 17:37:49.086900] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:29.399 17:37:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.399 17:37:49 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:29.399 17:37:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.399 17:37:49 -- common/autotest_common.sh@10 -- # set +x 00:28:29.399 17:37:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.399 17:37:49 -- host/target_disconnect.sh@58 -- # wait 2847319 00:28:29.399 Read completed with error (sct=0, sc=8) 00:28:29.399 starting I/O failed 00:28:29.399 Write completed with error (sct=0, sc=8) 00:28:29.399 starting I/O failed 00:28:29.399 Read completed with error (sct=0, sc=8) 00:28:29.399 starting I/O failed 00:28:29.399 Write completed with error (sct=0, sc=8) 00:28:29.399 starting I/O failed 00:28:29.399 Read completed with error (sct=0, sc=8) 00:28:29.399 starting I/O failed 00:28:29.399 Write completed with error (sct=0, sc=8) 00:28:29.399 starting I/O failed 00:28:29.399 Read completed with error (sct=0, sc=8) 00:28:29.399 starting I/O failed 00:28:29.399 Read completed with error (sct=0, sc=8) 00:28:29.399 starting I/O failed 00:28:29.399 Write completed with error (sct=0, sc=8) 00:28:29.399 starting I/O failed 00:28:29.399 Write completed with error (sct=0, sc=8) 00:28:29.399 starting I/O failed 00:28:29.399 Write completed with error (sct=0, sc=8) 00:28:29.399 starting I/O failed 00:28:29.399 Read completed with error (sct=0, sc=8) 00:28:29.399 starting I/O failed 00:28:29.399 Read completed with error (sct=0, sc=8) 00:28:29.399 starting I/O failed 00:28:29.399 Read completed with error (sct=0, sc=8) 00:28:29.399 starting I/O failed 00:28:29.399 Read completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Read completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Read completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Write completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Read completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Read completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Read completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Read completed with error 
(sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Write completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Read completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Read completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Write completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Read completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Write completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Write completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Write completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Read completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 Write completed with error (sct=0, sc=8) 00:28:29.400 starting I/O failed 00:28:29.400 [2024-11-09 17:37:49.129124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.400 [2024-11-09 17:37:49.140919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.400 [2024-11-09 17:37:49.140975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.400 [2024-11-09 17:37:49.140997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.400 [2024-11-09 17:37:49.141008] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.400 [2024-11-09 17:37:49.141026] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.400 [2024-11-09 17:37:49.151324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.400 qpair failed and we were unable to recover it. 00:28:29.400 [2024-11-09 17:37:49.160837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.400 [2024-11-09 17:37:49.160882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.400 [2024-11-09 17:37:49.160900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.400 [2024-11-09 17:37:49.160910] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.400 [2024-11-09 17:37:49.160919] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.660 [2024-11-09 17:37:49.171326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.660 qpair failed and we were unable to recover it. 
00:28:29.660 [2024-11-09 17:37:49.181102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.660 [2024-11-09 17:37:49.181137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.660 [2024-11-09 17:37:49.181155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.660 [2024-11-09 17:37:49.181168] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.660 [2024-11-09 17:37:49.181176] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.660 [2024-11-09 17:37:49.191499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-11-09 17:37:49.201098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.660 [2024-11-09 17:37:49.201143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.660 [2024-11-09 17:37:49.201161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.660 [2024-11-09 17:37:49.201171] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.660 [2024-11-09 17:37:49.201180] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.660 [2024-11-09 17:37:49.211384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-11-09 17:37:49.221061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.660 [2024-11-09 17:37:49.221103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.660 [2024-11-09 17:37:49.221120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.660 [2024-11-09 17:37:49.221130] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.660 [2024-11-09 17:37:49.221139] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.660 [2024-11-09 17:37:49.231518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.660 qpair failed and we were unable to recover it. 
00:28:29.660 [2024-11-09 17:37:49.241200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.660 [2024-11-09 17:37:49.241245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.660 [2024-11-09 17:37:49.241262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.660 [2024-11-09 17:37:49.241272] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.660 [2024-11-09 17:37:49.241281] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.660 [2024-11-09 17:37:49.251566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-11-09 17:37:49.261104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.660 [2024-11-09 17:37:49.261144] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.660 [2024-11-09 17:37:49.261162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.660 [2024-11-09 17:37:49.261172] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.660 [2024-11-09 17:37:49.261181] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.660 [2024-11-09 17:37:49.271672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-11-09 17:37:49.281232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.660 [2024-11-09 17:37:49.281276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.660 [2024-11-09 17:37:49.281293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.660 [2024-11-09 17:37:49.281302] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.660 [2024-11-09 17:37:49.281311] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.660 [2024-11-09 17:37:49.291630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.660 qpair failed and we were unable to recover it. 
00:28:29.660 [2024-11-09 17:37:49.301153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.660 [2024-11-09 17:37:49.301195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.660 [2024-11-09 17:37:49.301213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.660 [2024-11-09 17:37:49.301222] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.660 [2024-11-09 17:37:49.301231] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.660 [2024-11-09 17:37:49.311807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-11-09 17:37:49.321508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.660 [2024-11-09 17:37:49.321545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.660 [2024-11-09 17:37:49.321563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.660 [2024-11-09 17:37:49.321573] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.660 [2024-11-09 17:37:49.321582] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.660 [2024-11-09 17:37:49.331685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.660 qpair failed and we were unable to recover it. 00:28:29.660 [2024-11-09 17:37:49.341324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.660 [2024-11-09 17:37:49.341366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.660 [2024-11-09 17:37:49.341384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.660 [2024-11-09 17:37:49.341394] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.660 [2024-11-09 17:37:49.341403] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.661 [2024-11-09 17:37:49.351696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.661 qpair failed and we were unable to recover it. 
00:28:29.661 [2024-11-09 17:37:49.361386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.661 [2024-11-09 17:37:49.361430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.661 [2024-11-09 17:37:49.361451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.661 [2024-11-09 17:37:49.361466] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.661 [2024-11-09 17:37:49.361475] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.661 [2024-11-09 17:37:49.371864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-11-09 17:37:49.381561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.661 [2024-11-09 17:37:49.381606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.661 [2024-11-09 17:37:49.381623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.661 [2024-11-09 17:37:49.381632] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.661 [2024-11-09 17:37:49.381641] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.661 [2024-11-09 17:37:49.392125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.661 qpair failed and we were unable to recover it. 00:28:29.661 [2024-11-09 17:37:49.401612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.661 [2024-11-09 17:37:49.401654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.661 [2024-11-09 17:37:49.401670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.661 [2024-11-09 17:37:49.401680] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.661 [2024-11-09 17:37:49.401688] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.661 [2024-11-09 17:37:49.412099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.661 qpair failed and we were unable to recover it. 
00:28:29.661 [2024-11-09 17:37:49.421495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.661 [2024-11-09 17:37:49.421543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.661 [2024-11-09 17:37:49.421560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.661 [2024-11-09 17:37:49.421569] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.661 [2024-11-09 17:37:49.421578] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.925 [2024-11-09 17:37:49.432129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.925 qpair failed and we were unable to recover it. 00:28:29.925 [2024-11-09 17:37:49.441681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.925 [2024-11-09 17:37:49.441724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.925 [2024-11-09 17:37:49.441742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.925 [2024-11-09 17:37:49.441752] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.925 [2024-11-09 17:37:49.441764] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.925 [2024-11-09 17:37:49.451991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.925 qpair failed and we were unable to recover it. 00:28:29.925 [2024-11-09 17:37:49.461660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.925 [2024-11-09 17:37:49.461705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.925 [2024-11-09 17:37:49.461722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.925 [2024-11-09 17:37:49.461732] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.925 [2024-11-09 17:37:49.461741] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.925 [2024-11-09 17:37:49.472333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.925 qpair failed and we were unable to recover it. 
00:28:29.925 [2024-11-09 17:37:49.481880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.925 [2024-11-09 17:37:49.481916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.925 [2024-11-09 17:37:49.481933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.925 [2024-11-09 17:37:49.481943] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.925 [2024-11-09 17:37:49.481952] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.926 [2024-11-09 17:37:49.492318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.926 qpair failed and we were unable to recover it. 00:28:29.926 [2024-11-09 17:37:49.501932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.926 [2024-11-09 17:37:49.501970] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.926 [2024-11-09 17:37:49.501986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.926 [2024-11-09 17:37:49.501996] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.926 [2024-11-09 17:37:49.502004] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.926 [2024-11-09 17:37:49.512265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.926 qpair failed and we were unable to recover it. 00:28:29.926 [2024-11-09 17:37:49.521873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.926 [2024-11-09 17:37:49.521913] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.926 [2024-11-09 17:37:49.521929] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.926 [2024-11-09 17:37:49.521939] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.926 [2024-11-09 17:37:49.521947] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.926 [2024-11-09 17:37:49.532362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.926 qpair failed and we were unable to recover it. 
00:28:29.926 [2024-11-09 17:37:49.542162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.926 [2024-11-09 17:37:49.542211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.926 [2024-11-09 17:37:49.542227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.926 [2024-11-09 17:37:49.542237] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.926 [2024-11-09 17:37:49.542246] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.926 [2024-11-09 17:37:49.552415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.926 qpair failed and we were unable to recover it. 00:28:29.926 [2024-11-09 17:37:49.561885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.926 [2024-11-09 17:37:49.561929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.926 [2024-11-09 17:37:49.561945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.926 [2024-11-09 17:37:49.561955] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.926 [2024-11-09 17:37:49.561963] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.926 [2024-11-09 17:37:49.572565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.926 qpair failed and we were unable to recover it. 00:28:29.926 [2024-11-09 17:37:49.582190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.926 [2024-11-09 17:37:49.582231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.926 [2024-11-09 17:37:49.582247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.926 [2024-11-09 17:37:49.582257] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.926 [2024-11-09 17:37:49.582265] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.926 [2024-11-09 17:37:49.592680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.926 qpair failed and we were unable to recover it. 
00:28:29.926 [2024-11-09 17:37:49.602216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.926 [2024-11-09 17:37:49.602255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.926 [2024-11-09 17:37:49.602271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.926 [2024-11-09 17:37:49.602281] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.926 [2024-11-09 17:37:49.602289] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.926 [2024-11-09 17:37:49.612644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.926 qpair failed and we were unable to recover it. 00:28:29.926 [2024-11-09 17:37:49.622201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.926 [2024-11-09 17:37:49.622239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.926 [2024-11-09 17:37:49.622255] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.926 [2024-11-09 17:37:49.622267] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.926 [2024-11-09 17:37:49.622276] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.926 [2024-11-09 17:37:49.632844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.926 qpair failed and we were unable to recover it. 00:28:29.927 [2024-11-09 17:37:49.642212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.927 [2024-11-09 17:37:49.642255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.927 [2024-11-09 17:37:49.642272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.927 [2024-11-09 17:37:49.642283] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.927 [2024-11-09 17:37:49.642293] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.927 [2024-11-09 17:37:49.652762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.927 qpair failed and we were unable to recover it. 
00:28:29.927 [2024-11-09 17:37:49.662276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.927 [2024-11-09 17:37:49.662316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.927 [2024-11-09 17:37:49.662333] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.927 [2024-11-09 17:37:49.662343] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.927 [2024-11-09 17:37:49.662351] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.927 [2024-11-09 17:37:49.672771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.927 qpair failed and we were unable to recover it. 00:28:29.927 [2024-11-09 17:37:49.682417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.927 [2024-11-09 17:37:49.682467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.927 [2024-11-09 17:37:49.682485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.927 [2024-11-09 17:37:49.682495] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.927 [2024-11-09 17:37:49.682503] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.189 [2024-11-09 17:37:49.692870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.189 qpair failed and we were unable to recover it. 00:28:30.189 [2024-11-09 17:37:49.702552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.189 [2024-11-09 17:37:49.702594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.189 [2024-11-09 17:37:49.702612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.189 [2024-11-09 17:37:49.702622] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.189 [2024-11-09 17:37:49.702632] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.189 [2024-11-09 17:37:49.712882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.189 qpair failed and we were unable to recover it. 
00:28:30.189 [2024-11-09 17:37:49.722427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.189 [2024-11-09 17:37:49.722468] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.189 [2024-11-09 17:37:49.722485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.190 [2024-11-09 17:37:49.722494] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.190 [2024-11-09 17:37:49.722503] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.190 [2024-11-09 17:37:49.733032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.190 qpair failed and we were unable to recover it. 00:28:30.190 [2024-11-09 17:37:49.742591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.190 [2024-11-09 17:37:49.742631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.190 [2024-11-09 17:37:49.742648] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.190 [2024-11-09 17:37:49.742658] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.190 [2024-11-09 17:37:49.742667] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.190 [2024-11-09 17:37:49.752974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.190 qpair failed and we were unable to recover it. 00:28:30.190 [2024-11-09 17:37:49.762568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.190 [2024-11-09 17:37:49.762609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.190 [2024-11-09 17:37:49.762625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.190 [2024-11-09 17:37:49.762634] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.190 [2024-11-09 17:37:49.762643] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.190 [2024-11-09 17:37:49.773158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.190 qpair failed and we were unable to recover it. 
00:28:30.190 [2024-11-09 17:37:49.782829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.190 [2024-11-09 17:37:49.782867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.190 [2024-11-09 17:37:49.782883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.190 [2024-11-09 17:37:49.782893] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.190 [2024-11-09 17:37:49.782901] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.190 [2024-11-09 17:37:49.793124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.190 qpair failed and we were unable to recover it. 00:28:30.190 [2024-11-09 17:37:49.802628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.190 [2024-11-09 17:37:49.802669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.190 [2024-11-09 17:37:49.802693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.190 [2024-11-09 17:37:49.802703] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.190 [2024-11-09 17:37:49.802711] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.190 [2024-11-09 17:37:49.813323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.190 qpair failed and we were unable to recover it. 00:28:30.190 [2024-11-09 17:37:49.822910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.190 [2024-11-09 17:37:49.822948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.190 [2024-11-09 17:37:49.822965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.190 [2024-11-09 17:37:49.822975] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.190 [2024-11-09 17:37:49.822983] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.190 [2024-11-09 17:37:49.833303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.190 qpair failed and we were unable to recover it. 
00:28:30.190 [2024-11-09 17:37:49.842811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.190 [2024-11-09 17:37:49.842855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.190 [2024-11-09 17:37:49.842872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.190 [2024-11-09 17:37:49.842881] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.190 [2024-11-09 17:37:49.842890] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.190 [2024-11-09 17:37:49.853460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.190 qpair failed and we were unable to recover it. 00:28:30.190 [2024-11-09 17:37:49.863049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.190 [2024-11-09 17:37:49.863095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.190 [2024-11-09 17:37:49.863112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.190 [2024-11-09 17:37:49.863121] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.190 [2024-11-09 17:37:49.863130] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.190 [2024-11-09 17:37:49.873475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.190 qpair failed and we were unable to recover it. 00:28:30.190 [2024-11-09 17:37:49.882946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.190 [2024-11-09 17:37:49.882985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.190 [2024-11-09 17:37:49.883002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.190 [2024-11-09 17:37:49.883012] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.190 [2024-11-09 17:37:49.883024] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.190 [2024-11-09 17:37:49.893598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.190 qpair failed and we were unable to recover it. 
00:28:30.190 [2024-11-09 17:37:49.903170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.190 [2024-11-09 17:37:49.903210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.190 [2024-11-09 17:37:49.903226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.190 [2024-11-09 17:37:49.903237] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.190 [2024-11-09 17:37:49.903245] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.190 [2024-11-09 17:37:49.913478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.190 qpair failed and we were unable to recover it. 00:28:30.190 [2024-11-09 17:37:49.923056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.190 [2024-11-09 17:37:49.923098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.190 [2024-11-09 17:37:49.923114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.190 [2024-11-09 17:37:49.923124] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.190 [2024-11-09 17:37:49.923133] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.190 [2024-11-09 17:37:49.933647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.190 qpair failed and we were unable to recover it. 00:28:30.190 [2024-11-09 17:37:49.943177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.190 [2024-11-09 17:37:49.943219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.190 [2024-11-09 17:37:49.943235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.190 [2024-11-09 17:37:49.943245] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.190 [2024-11-09 17:37:49.943254] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.190 [2024-11-09 17:37:49.953527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.190 qpair failed and we were unable to recover it. 
00:28:30.450 [2024-11-09 17:37:49.963165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.450 [2024-11-09 17:37:49.963210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.450 [2024-11-09 17:37:49.963228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.450 [2024-11-09 17:37:49.963238] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.450 [2024-11-09 17:37:49.963247] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.450 [2024-11-09 17:37:49.973782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.450 qpair failed and we were unable to recover it. 00:28:30.450 [2024-11-09 17:37:49.983225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.450 [2024-11-09 17:37:49.983270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.450 [2024-11-09 17:37:49.983288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.450 [2024-11-09 17:37:49.983297] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.450 [2024-11-09 17:37:49.983306] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.450 [2024-11-09 17:37:49.993866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.450 qpair failed and we were unable to recover it. 00:28:30.450 [2024-11-09 17:37:50.003487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.450 [2024-11-09 17:37:50.003527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.450 [2024-11-09 17:37:50.003544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.450 [2024-11-09 17:37:50.003553] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.450 [2024-11-09 17:37:50.003562] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.450 [2024-11-09 17:37:50.013757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.450 qpair failed and we were unable to recover it. 
00:28:30.450 [2024-11-09 17:37:50.023350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.450 [2024-11-09 17:37:50.023396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.450 [2024-11-09 17:37:50.023417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.450 [2024-11-09 17:37:50.023427] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.450 [2024-11-09 17:37:50.023435] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.450 [2024-11-09 17:37:50.033877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.450 qpair failed and we were unable to recover it. 00:28:30.450 [2024-11-09 17:37:50.043552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.450 [2024-11-09 17:37:50.043594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.450 [2024-11-09 17:37:50.043616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.450 [2024-11-09 17:37:50.043626] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.450 [2024-11-09 17:37:50.043635] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.450 [2024-11-09 17:37:50.053994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.450 qpair failed and we were unable to recover it. 00:28:30.450 [2024-11-09 17:37:50.063487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.450 [2024-11-09 17:37:50.063528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.450 [2024-11-09 17:37:50.063545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.450 [2024-11-09 17:37:50.063559] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.450 [2024-11-09 17:37:50.063568] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.450 [2024-11-09 17:37:50.073843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.450 qpair failed and we were unable to recover it. 
00:28:30.450 [2024-11-09 17:37:50.083555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.450 [2024-11-09 17:37:50.083598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.450 [2024-11-09 17:37:50.083615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.450 [2024-11-09 17:37:50.083625] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.450 [2024-11-09 17:37:50.083633] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.450 [2024-11-09 17:37:50.093898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.450 qpair failed and we were unable to recover it. 00:28:30.450 [2024-11-09 17:37:50.103821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.450 [2024-11-09 17:37:50.103870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.450 [2024-11-09 17:37:50.103888] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.450 [2024-11-09 17:37:50.103898] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.450 [2024-11-09 17:37:50.103906] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.450 [2024-11-09 17:37:50.114132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.450 qpair failed and we were unable to recover it. 00:28:30.450 [2024-11-09 17:37:50.123654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.450 [2024-11-09 17:37:50.123696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.450 [2024-11-09 17:37:50.123714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.450 [2024-11-09 17:37:50.123723] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.450 [2024-11-09 17:37:50.123732] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.450 [2024-11-09 17:37:50.133927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.450 qpair failed and we were unable to recover it. 
00:28:30.450 [2024-11-09 17:37:50.143762] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.450 [2024-11-09 17:37:50.143804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.450 [2024-11-09 17:37:50.143821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.450 [2024-11-09 17:37:50.143830] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.450 [2024-11-09 17:37:50.143840] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.450 [2024-11-09 17:37:50.154081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.450 qpair failed and we were unable to recover it. 00:28:30.450 [2024-11-09 17:37:50.163671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.450 [2024-11-09 17:37:50.163712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.450 [2024-11-09 17:37:50.163730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.450 [2024-11-09 17:37:50.163740] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.450 [2024-11-09 17:37:50.163749] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.450 [2024-11-09 17:37:50.174109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.450 qpair failed and we were unable to recover it. 00:28:30.451 [2024-11-09 17:37:50.183929] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.451 [2024-11-09 17:37:50.183978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.451 [2024-11-09 17:37:50.183994] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.451 [2024-11-09 17:37:50.184004] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.451 [2024-11-09 17:37:50.184013] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.451 [2024-11-09 17:37:50.194286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.451 qpair failed and we were unable to recover it. 
00:28:30.451 [2024-11-09 17:37:50.203816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.451 [2024-11-09 17:37:50.203855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.451 [2024-11-09 17:37:50.203872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.451 [2024-11-09 17:37:50.203883] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.451 [2024-11-09 17:37:50.203892] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.451 [2024-11-09 17:37:50.214150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.451 qpair failed and we were unable to recover it. 00:28:30.710 [2024-11-09 17:37:50.223873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.710 [2024-11-09 17:37:50.223918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.710 [2024-11-09 17:37:50.223936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.710 [2024-11-09 17:37:50.223947] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.710 [2024-11-09 17:37:50.223956] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.710 [2024-11-09 17:37:50.234366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.710 qpair failed and we were unable to recover it. 00:28:30.710 [2024-11-09 17:37:50.243962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.710 [2024-11-09 17:37:50.244005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.710 [2024-11-09 17:37:50.244025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.710 [2024-11-09 17:37:50.244034] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.710 [2024-11-09 17:37:50.244043] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.710 [2024-11-09 17:37:50.254486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.710 qpair failed and we were unable to recover it. 
00:28:30.710 [2024-11-09 17:37:50.264109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.710 [2024-11-09 17:37:50.264152] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.710 [2024-11-09 17:37:50.264170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.710 [2024-11-09 17:37:50.264179] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.710 [2024-11-09 17:37:50.264188] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.710 [2024-11-09 17:37:50.274459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.710 qpair failed and we were unable to recover it. 00:28:30.710 [2024-11-09 17:37:50.284054] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.710 [2024-11-09 17:37:50.284094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.710 [2024-11-09 17:37:50.284110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.710 [2024-11-09 17:37:50.284120] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.710 [2024-11-09 17:37:50.284129] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.710 [2024-11-09 17:37:50.294429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.710 qpair failed and we were unable to recover it. 00:28:30.710 [2024-11-09 17:37:50.304115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.711 [2024-11-09 17:37:50.304155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.711 [2024-11-09 17:37:50.304171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.711 [2024-11-09 17:37:50.304181] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.711 [2024-11-09 17:37:50.304189] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.711 [2024-11-09 17:37:50.314541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.711 qpair failed and we were unable to recover it. 
00:28:30.711 [2024-11-09 17:37:50.324176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.711 [2024-11-09 17:37:50.324218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.711 [2024-11-09 17:37:50.324235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.711 [2024-11-09 17:37:50.324245] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.711 [2024-11-09 17:37:50.324253] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.711 [2024-11-09 17:37:50.334502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.711 qpair failed and we were unable to recover it. 00:28:30.711 [2024-11-09 17:37:50.344214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.711 [2024-11-09 17:37:50.344253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.711 [2024-11-09 17:37:50.344270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.711 [2024-11-09 17:37:50.344279] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.711 [2024-11-09 17:37:50.344288] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.711 [2024-11-09 17:37:50.354464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.711 qpair failed and we were unable to recover it. 00:28:30.711 [2024-11-09 17:37:50.364354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.711 [2024-11-09 17:37:50.364397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.711 [2024-11-09 17:37:50.364414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.711 [2024-11-09 17:37:50.364423] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.711 [2024-11-09 17:37:50.364432] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.711 [2024-11-09 17:37:50.374714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.711 qpair failed and we were unable to recover it. 
00:28:30.711 [2024-11-09 17:37:50.384393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.711 [2024-11-09 17:37:50.384434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.711 [2024-11-09 17:37:50.384450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.711 [2024-11-09 17:37:50.384465] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.711 [2024-11-09 17:37:50.384474] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.711 [2024-11-09 17:37:50.394779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.711 qpair failed and we were unable to recover it. 00:28:30.711 [2024-11-09 17:37:50.404512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.711 [2024-11-09 17:37:50.404553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.711 [2024-11-09 17:37:50.404569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.711 [2024-11-09 17:37:50.404578] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.711 [2024-11-09 17:37:50.404587] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.711 [2024-11-09 17:37:50.414763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.711 qpair failed and we were unable to recover it. 00:28:30.711 [2024-11-09 17:37:50.424453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.711 [2024-11-09 17:37:50.424504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.711 [2024-11-09 17:37:50.424521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.711 [2024-11-09 17:37:50.424531] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.711 [2024-11-09 17:37:50.424540] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.711 [2024-11-09 17:37:50.434859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.711 qpair failed and we were unable to recover it. 
00:28:30.711 [2024-11-09 17:37:50.444480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.711 [2024-11-09 17:37:50.444520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.711 [2024-11-09 17:37:50.444537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.711 [2024-11-09 17:37:50.444547] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.711 [2024-11-09 17:37:50.444555] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.711 [2024-11-09 17:37:50.454936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.711 qpair failed and we were unable to recover it. 00:28:30.711 [2024-11-09 17:37:50.464740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.711 [2024-11-09 17:37:50.464785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.711 [2024-11-09 17:37:50.464802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.711 [2024-11-09 17:37:50.464811] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.711 [2024-11-09 17:37:50.464820] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.711 [2024-11-09 17:37:50.475036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.711 qpair failed and we were unable to recover it. 00:28:30.971 [2024-11-09 17:37:50.484688] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.971 [2024-11-09 17:37:50.484730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.971 [2024-11-09 17:37:50.484748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.971 [2024-11-09 17:37:50.484758] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.971 [2024-11-09 17:37:50.484767] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.971 [2024-11-09 17:37:50.495267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.971 qpair failed and we were unable to recover it. 
00:28:30.971 [2024-11-09 17:37:50.504851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.971 [2024-11-09 17:37:50.504889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.971 [2024-11-09 17:37:50.504906] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.971 [2024-11-09 17:37:50.504915] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.971 [2024-11-09 17:37:50.504927] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.971 [2024-11-09 17:37:50.515098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-11-09 17:37:50.524691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.971 [2024-11-09 17:37:50.524735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.971 [2024-11-09 17:37:50.524752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.971 [2024-11-09 17:37:50.524762] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.971 [2024-11-09 17:37:50.524771] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.971 [2024-11-09 17:37:50.535245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-11-09 17:37:50.544822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.971 [2024-11-09 17:37:50.544863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.971 [2024-11-09 17:37:50.544879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.971 [2024-11-09 17:37:50.544889] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.971 [2024-11-09 17:37:50.544898] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.971 [2024-11-09 17:37:50.555319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.971 qpair failed and we were unable to recover it. 
00:28:30.971 [2024-11-09 17:37:50.565031] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.971 [2024-11-09 17:37:50.565069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.971 [2024-11-09 17:37:50.565087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.971 [2024-11-09 17:37:50.565097] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.971 [2024-11-09 17:37:50.565106] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.972 [2024-11-09 17:37:50.575485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-11-09 17:37:50.585027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.972 [2024-11-09 17:37:50.585069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.972 [2024-11-09 17:37:50.585086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.972 [2024-11-09 17:37:50.585095] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.972 [2024-11-09 17:37:50.585104] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.972 [2024-11-09 17:37:50.595379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-11-09 17:37:50.605098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.972 [2024-11-09 17:37:50.605140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.972 [2024-11-09 17:37:50.605157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.972 [2024-11-09 17:37:50.605167] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.972 [2024-11-09 17:37:50.605176] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.972 [2024-11-09 17:37:50.615482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.972 qpair failed and we were unable to recover it. 
00:28:30.972 [2024-11-09 17:37:50.625213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.972 [2024-11-09 17:37:50.625253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.972 [2024-11-09 17:37:50.625270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.972 [2024-11-09 17:37:50.625280] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.972 [2024-11-09 17:37:50.625289] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.972 [2024-11-09 17:37:50.635552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-11-09 17:37:50.645373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.972 [2024-11-09 17:37:50.645413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.972 [2024-11-09 17:37:50.645430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.972 [2024-11-09 17:37:50.645440] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.972 [2024-11-09 17:37:50.645449] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.972 [2024-11-09 17:37:50.655682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-11-09 17:37:50.665410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.972 [2024-11-09 17:37:50.665450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.972 [2024-11-09 17:37:50.665471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.972 [2024-11-09 17:37:50.665481] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.972 [2024-11-09 17:37:50.665490] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.972 [2024-11-09 17:37:50.675681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.972 qpair failed and we were unable to recover it. 
00:28:30.972 [2024-11-09 17:37:50.685469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.972 [2024-11-09 17:37:50.685518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.972 [2024-11-09 17:37:50.685539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.972 [2024-11-09 17:37:50.685548] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.972 [2024-11-09 17:37:50.685557] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.972 [2024-11-09 17:37:50.695817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-11-09 17:37:50.705623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.972 [2024-11-09 17:37:50.705662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.972 [2024-11-09 17:37:50.705679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.972 [2024-11-09 17:37:50.705689] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.972 [2024-11-09 17:37:50.705698] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.972 [2024-11-09 17:37:50.715877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-11-09 17:37:50.725535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.972 [2024-11-09 17:37:50.725576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.972 [2024-11-09 17:37:50.725594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.972 [2024-11-09 17:37:50.725604] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.972 [2024-11-09 17:37:50.725613] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.972 [2024-11-09 17:37:50.735779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.972 qpair failed and we were unable to recover it. 
00:28:31.232 [2024-11-09 17:37:50.745663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.233 [2024-11-09 17:37:50.745702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.233 [2024-11-09 17:37:50.745718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.233 [2024-11-09 17:37:50.745728] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.233 [2024-11-09 17:37:50.745737] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.233 [2024-11-09 17:37:50.755905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-11-09 17:37:50.765628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.233 [2024-11-09 17:37:50.765666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.233 [2024-11-09 17:37:50.765683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.233 [2024-11-09 17:37:50.765693] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.233 [2024-11-09 17:37:50.765702] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.233 [2024-11-09 17:37:50.775998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-11-09 17:37:50.785713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.233 [2024-11-09 17:37:50.785755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.233 [2024-11-09 17:37:50.785771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.233 [2024-11-09 17:37:50.785780] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.233 [2024-11-09 17:37:50.785789] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.233 [2024-11-09 17:37:50.795901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.233 qpair failed and we were unable to recover it. 
00:28:31.233 [2024-11-09 17:37:50.805673] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.233 [2024-11-09 17:37:50.805714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.233 [2024-11-09 17:37:50.805730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.233 [2024-11-09 17:37:50.805740] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.233 [2024-11-09 17:37:50.805749] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.233 [2024-11-09 17:37:50.816019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-11-09 17:37:50.825880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.233 [2024-11-09 17:37:50.825919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.233 [2024-11-09 17:37:50.825936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.233 [2024-11-09 17:37:50.825946] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.233 [2024-11-09 17:37:50.825955] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.233 [2024-11-09 17:37:50.836001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-11-09 17:37:50.845807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.233 [2024-11-09 17:37:50.845848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.233 [2024-11-09 17:37:50.845865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.233 [2024-11-09 17:37:50.845874] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.233 [2024-11-09 17:37:50.845883] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.233 [2024-11-09 17:37:50.856051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.233 qpair failed and we were unable to recover it. 
00:28:31.233 [2024-11-09 17:37:50.865952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.233 [2024-11-09 17:37:50.865987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.233 [2024-11-09 17:37:50.866006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.233 [2024-11-09 17:37:50.866016] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.233 [2024-11-09 17:37:50.866025] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.233 [2024-11-09 17:37:50.876188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-11-09 17:37:50.885893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.233 [2024-11-09 17:37:50.885933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.233 [2024-11-09 17:37:50.885950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.233 [2024-11-09 17:37:50.885959] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.233 [2024-11-09 17:37:50.885968] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.233 [2024-11-09 17:37:50.896415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-11-09 17:37:50.905989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.233 [2024-11-09 17:37:50.906030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.233 [2024-11-09 17:37:50.906046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.233 [2024-11-09 17:37:50.906056] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.233 [2024-11-09 17:37:50.906065] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.233 [2024-11-09 17:37:50.916135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.233 qpair failed and we were unable to recover it. 
00:28:31.233 [2024-11-09 17:37:50.926030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.233 [2024-11-09 17:37:50.926074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.233 [2024-11-09 17:37:50.926091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.233 [2024-11-09 17:37:50.926101] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.233 [2024-11-09 17:37:50.926110] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.233 [2024-11-09 17:37:50.936381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-11-09 17:37:50.946147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.233 [2024-11-09 17:37:50.946182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.233 [2024-11-09 17:37:50.946198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.233 [2024-11-09 17:37:50.946208] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.233 [2024-11-09 17:37:50.946220] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.233 [2024-11-09 17:37:50.956499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.233 [2024-11-09 17:37:50.966188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.233 [2024-11-09 17:37:50.966231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.233 [2024-11-09 17:37:50.966247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.233 [2024-11-09 17:37:50.966257] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.233 [2024-11-09 17:37:50.966265] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.233 [2024-11-09 17:37:50.976600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.233 qpair failed and we were unable to recover it. 
00:28:31.233 [2024-11-09 17:37:50.986302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.233 [2024-11-09 17:37:50.986344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.233 [2024-11-09 17:37:50.986360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.233 [2024-11-09 17:37:50.986369] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.233 [2024-11-09 17:37:50.986378] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.233 [2024-11-09 17:37:50.996589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.233 qpair failed and we were unable to recover it. 00:28:31.493 [2024-11-09 17:37:51.006380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.493 [2024-11-09 17:37:51.006420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.493 [2024-11-09 17:37:51.006436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.493 [2024-11-09 17:37:51.006446] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.493 [2024-11-09 17:37:51.006460] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.493 [2024-11-09 17:37:51.016652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.493 qpair failed and we were unable to recover it. 00:28:31.493 [2024-11-09 17:37:51.026605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.493 [2024-11-09 17:37:51.026646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.493 [2024-11-09 17:37:51.026664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.493 [2024-11-09 17:37:51.026674] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.493 [2024-11-09 17:37:51.026683] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.493 [2024-11-09 17:37:51.036801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.493 qpair failed and we were unable to recover it. 
00:28:31.493 [2024-11-09 17:37:51.046579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.493 [2024-11-09 17:37:51.046620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.493 [2024-11-09 17:37:51.046637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.493 [2024-11-09 17:37:51.046646] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.493 [2024-11-09 17:37:51.046655] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.493 [2024-11-09 17:37:51.056927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.493 qpair failed and we were unable to recover it. 00:28:31.493 [2024-11-09 17:37:51.066472] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.493 [2024-11-09 17:37:51.066513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.493 [2024-11-09 17:37:51.066530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.493 [2024-11-09 17:37:51.066539] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.493 [2024-11-09 17:37:51.066548] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.493 [2024-11-09 17:37:51.076918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.493 qpair failed and we were unable to recover it. 00:28:31.493 [2024-11-09 17:37:51.086639] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.493 [2024-11-09 17:37:51.086679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.493 [2024-11-09 17:37:51.086695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.493 [2024-11-09 17:37:51.086705] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.493 [2024-11-09 17:37:51.086714] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.493 [2024-11-09 17:37:51.096941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.493 qpair failed and we were unable to recover it. 
00:28:31.493 [2024-11-09 17:37:51.106657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.493 [2024-11-09 17:37:51.106701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.493 [2024-11-09 17:37:51.106717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.493 [2024-11-09 17:37:51.106727] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.493 [2024-11-09 17:37:51.106736] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.493 [2024-11-09 17:37:51.117047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.493 qpair failed and we were unable to recover it. 00:28:31.493 [2024-11-09 17:37:51.126667] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.493 [2024-11-09 17:37:51.126709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.494 [2024-11-09 17:37:51.126726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.494 [2024-11-09 17:37:51.126739] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.494 [2024-11-09 17:37:51.126747] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.494 [2024-11-09 17:37:51.137002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.494 qpair failed and we were unable to recover it. 00:28:31.494 [2024-11-09 17:37:51.146723] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.494 [2024-11-09 17:37:51.146770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.494 [2024-11-09 17:37:51.146787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.494 [2024-11-09 17:37:51.146796] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.494 [2024-11-09 17:37:51.146805] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.494 [2024-11-09 17:37:51.157189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.494 qpair failed and we were unable to recover it. 
00:28:31.494 [2024-11-09 17:37:51.166726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.494 [2024-11-09 17:37:51.166766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.494 [2024-11-09 17:37:51.166783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.494 [2024-11-09 17:37:51.166792] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.494 [2024-11-09 17:37:51.166801] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.494 [2024-11-09 17:37:51.177420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.494 qpair failed and we were unable to recover it. 00:28:31.494 [2024-11-09 17:37:51.186955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.494 [2024-11-09 17:37:51.186991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.494 [2024-11-09 17:37:51.187009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.494 [2024-11-09 17:37:51.187018] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.494 [2024-11-09 17:37:51.187027] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.494 [2024-11-09 17:37:51.197350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.494 qpair failed and we were unable to recover it. 00:28:31.494 [2024-11-09 17:37:51.206920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.494 [2024-11-09 17:37:51.206961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.494 [2024-11-09 17:37:51.206978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.494 [2024-11-09 17:37:51.206989] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.494 [2024-11-09 17:37:51.206998] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.494 [2024-11-09 17:37:51.217476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.494 qpair failed and we were unable to recover it. 
00:28:31.494 [2024-11-09 17:37:51.227089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.494 [2024-11-09 17:37:51.227135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.494 [2024-11-09 17:37:51.227152] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.494 [2024-11-09 17:37:51.227162] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.494 [2024-11-09 17:37:51.227171] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.494 [2024-11-09 17:37:51.237382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.494 qpair failed and we were unable to recover it. 00:28:31.494 [2024-11-09 17:37:51.247085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.494 [2024-11-09 17:37:51.247128] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.494 [2024-11-09 17:37:51.247144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.494 [2024-11-09 17:37:51.247153] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.494 [2024-11-09 17:37:51.247162] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.494 [2024-11-09 17:37:51.257363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.494 qpair failed and we were unable to recover it. 00:28:31.753 [2024-11-09 17:37:51.267171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.753 [2024-11-09 17:37:51.267213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.753 [2024-11-09 17:37:51.267229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.753 [2024-11-09 17:37:51.267239] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.753 [2024-11-09 17:37:51.267248] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.753 [2024-11-09 17:37:51.277677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.753 qpair failed and we were unable to recover it. 
00:28:31.754 [2024-11-09 17:37:51.287181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.754 [2024-11-09 17:37:51.287221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.754 [2024-11-09 17:37:51.287237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.754 [2024-11-09 17:37:51.287247] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.754 [2024-11-09 17:37:51.287256] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.754 [2024-11-09 17:37:51.297557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.754 qpair failed and we were unable to recover it. 00:28:31.754 [2024-11-09 17:37:51.307206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.754 [2024-11-09 17:37:51.307248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.754 [2024-11-09 17:37:51.307270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.754 [2024-11-09 17:37:51.307280] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.754 [2024-11-09 17:37:51.307289] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.754 [2024-11-09 17:37:51.317781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.754 qpair failed and we were unable to recover it. 00:28:31.754 [2024-11-09 17:37:51.327321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.754 [2024-11-09 17:37:51.327363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.754 [2024-11-09 17:37:51.327380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.754 [2024-11-09 17:37:51.327389] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.754 [2024-11-09 17:37:51.327398] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.754 [2024-11-09 17:37:51.337849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.754 qpair failed and we were unable to recover it. 
00:28:31.754 [2024-11-09 17:37:51.347280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.754 [2024-11-09 17:37:51.347320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.754 [2024-11-09 17:37:51.347337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.754 [2024-11-09 17:37:51.347347] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.754 [2024-11-09 17:37:51.347355] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.754 [2024-11-09 17:37:51.357671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.754 qpair failed and we were unable to recover it. 00:28:31.754 [2024-11-09 17:37:51.367390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.754 [2024-11-09 17:37:51.367429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.754 [2024-11-09 17:37:51.367446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.754 [2024-11-09 17:37:51.367459] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.754 [2024-11-09 17:37:51.367468] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.754 [2024-11-09 17:37:51.378006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.754 qpair failed and we were unable to recover it. 00:28:31.754 [2024-11-09 17:37:51.387515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.754 [2024-11-09 17:37:51.387563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.754 [2024-11-09 17:37:51.387579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.754 [2024-11-09 17:37:51.387589] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.754 [2024-11-09 17:37:51.387601] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.754 [2024-11-09 17:37:51.397941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.754 qpair failed and we were unable to recover it. 
00:28:31.754 [2024-11-09 17:37:51.407347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.754 [2024-11-09 17:37:51.407387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.754 [2024-11-09 17:37:51.407403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.754 [2024-11-09 17:37:51.407413] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.754 [2024-11-09 17:37:51.407422] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.754 [2024-11-09 17:37:51.417963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.754 qpair failed and we were unable to recover it. 00:28:31.754 [2024-11-09 17:37:51.427604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.754 [2024-11-09 17:37:51.427645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.754 [2024-11-09 17:37:51.427662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.754 [2024-11-09 17:37:51.427672] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.754 [2024-11-09 17:37:51.427681] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.754 [2024-11-09 17:37:51.438079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.754 qpair failed and we were unable to recover it. 00:28:31.754 [2024-11-09 17:37:51.447636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.754 [2024-11-09 17:37:51.447677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.754 [2024-11-09 17:37:51.447694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.754 [2024-11-09 17:37:51.447703] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.754 [2024-11-09 17:37:51.447712] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.754 [2024-11-09 17:37:51.458030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.754 qpair failed and we were unable to recover it. 
00:28:31.754 [2024-11-09 17:37:51.467744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.754 [2024-11-09 17:37:51.467788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.754 [2024-11-09 17:37:51.467805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.754 [2024-11-09 17:37:51.467814] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.754 [2024-11-09 17:37:51.467822] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.754 [2024-11-09 17:37:51.478198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.754 qpair failed and we were unable to recover it. 00:28:31.754 [2024-11-09 17:37:51.487781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.754 [2024-11-09 17:37:51.487828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.754 [2024-11-09 17:37:51.487844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.754 [2024-11-09 17:37:51.487854] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.754 [2024-11-09 17:37:51.487862] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.754 [2024-11-09 17:37:51.498334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.754 qpair failed and we were unable to recover it. 00:28:31.754 [2024-11-09 17:37:51.507805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.754 [2024-11-09 17:37:51.507841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.754 [2024-11-09 17:37:51.507857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.754 [2024-11-09 17:37:51.507867] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.754 [2024-11-09 17:37:51.507875] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.754 [2024-11-09 17:37:51.518370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.754 qpair failed and we were unable to recover it. 
00:28:32.015 [2024-11-09 17:37:51.528008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.015 [2024-11-09 17:37:51.528049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.015 [2024-11-09 17:37:51.528066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.015 [2024-11-09 17:37:51.528076] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.015 [2024-11-09 17:37:51.528085] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.015 [2024-11-09 17:37:51.538348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.015 qpair failed and we were unable to recover it. 00:28:32.015 [2024-11-09 17:37:51.547943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.015 [2024-11-09 17:37:51.547988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.015 [2024-11-09 17:37:51.548005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.015 [2024-11-09 17:37:51.548015] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.015 [2024-11-09 17:37:51.548024] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.015 [2024-11-09 17:37:51.558287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.015 qpair failed and we were unable to recover it. 00:28:32.015 [2024-11-09 17:37:51.568045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.015 [2024-11-09 17:37:51.568082] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.015 [2024-11-09 17:37:51.568099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.015 [2024-11-09 17:37:51.568112] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.015 [2024-11-09 17:37:51.568121] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.015 [2024-11-09 17:37:51.578471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.015 qpair failed and we were unable to recover it. 
00:28:32.015 [2024-11-09 17:37:51.588061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.015 [2024-11-09 17:37:51.588101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.015 [2024-11-09 17:37:51.588118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.015 [2024-11-09 17:37:51.588127] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.015 [2024-11-09 17:37:51.588136] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.015 [2024-11-09 17:37:51.598579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.015 qpair failed and we were unable to recover it. 00:28:32.015 [2024-11-09 17:37:51.608126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.015 [2024-11-09 17:37:51.608167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.015 [2024-11-09 17:37:51.608183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.016 [2024-11-09 17:37:51.608193] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.016 [2024-11-09 17:37:51.608202] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.016 [2024-11-09 17:37:51.618474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.016 qpair failed and we were unable to recover it. 00:28:32.016 [2024-11-09 17:37:51.628199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.016 [2024-11-09 17:37:51.628245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.016 [2024-11-09 17:37:51.628262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.016 [2024-11-09 17:37:51.628272] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.016 [2024-11-09 17:37:51.628282] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.016 [2024-11-09 17:37:51.638727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.016 qpair failed and we were unable to recover it. 
00:28:32.016 [2024-11-09 17:37:51.648345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.016 [2024-11-09 17:37:51.648383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.016 [2024-11-09 17:37:51.648400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.016 [2024-11-09 17:37:51.648410] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.016 [2024-11-09 17:37:51.648419] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.016 [2024-11-09 17:37:51.658583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.016 qpair failed and we were unable to recover it. 00:28:32.016 [2024-11-09 17:37:51.668313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.016 [2024-11-09 17:37:51.668351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.016 [2024-11-09 17:37:51.668367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.016 [2024-11-09 17:37:51.668377] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.016 [2024-11-09 17:37:51.668386] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.016 [2024-11-09 17:37:51.678723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.016 qpair failed and we were unable to recover it. 00:28:32.016 [2024-11-09 17:37:51.688300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.016 [2024-11-09 17:37:51.688341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.016 [2024-11-09 17:37:51.688358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.016 [2024-11-09 17:37:51.688367] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.016 [2024-11-09 17:37:51.688375] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.016 [2024-11-09 17:37:51.698922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.016 qpair failed and we were unable to recover it. 
00:28:32.016 [2024-11-09 17:37:51.708448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.016 [2024-11-09 17:37:51.708497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.016 [2024-11-09 17:37:51.708514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.016 [2024-11-09 17:37:51.708524] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.016 [2024-11-09 17:37:51.708532] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.016 [2024-11-09 17:37:51.718827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.016 qpair failed and we were unable to recover it. 00:28:32.016 [2024-11-09 17:37:51.728596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.016 [2024-11-09 17:37:51.728632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.016 [2024-11-09 17:37:51.728650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.016 [2024-11-09 17:37:51.728659] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.016 [2024-11-09 17:37:51.728668] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.016 [2024-11-09 17:37:51.739126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.016 qpair failed and we were unable to recover it. 00:28:32.016 [2024-11-09 17:37:51.748525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.016 [2024-11-09 17:37:51.748563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.016 [2024-11-09 17:37:51.748583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.016 [2024-11-09 17:37:51.748592] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.016 [2024-11-09 17:37:51.748601] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.016 [2024-11-09 17:37:51.759005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.016 qpair failed and we were unable to recover it. 
00:28:32.016 [2024-11-09 17:37:51.768549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.016 [2024-11-09 17:37:51.768593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.016 [2024-11-09 17:37:51.768609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.016 [2024-11-09 17:37:51.768619] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.016 [2024-11-09 17:37:51.768627] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.016 [2024-11-09 17:37:51.778875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.016 qpair failed and we were unable to recover it. 00:28:32.277 [2024-11-09 17:37:51.788764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.277 [2024-11-09 17:37:51.788810] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.277 [2024-11-09 17:37:51.788827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.277 [2024-11-09 17:37:51.788837] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.277 [2024-11-09 17:37:51.788846] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.277 [2024-11-09 17:37:51.799326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-11-09 17:37:51.808841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.277 [2024-11-09 17:37:51.808885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.277 [2024-11-09 17:37:51.808902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.277 [2024-11-09 17:37:51.808911] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.277 [2024-11-09 17:37:51.808921] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.277 [2024-11-09 17:37:51.819229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.277 qpair failed and we were unable to recover it. 
00:28:32.277 [2024-11-09 17:37:51.828877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.277 [2024-11-09 17:37:51.828917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.277 [2024-11-09 17:37:51.828934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.277 [2024-11-09 17:37:51.828944] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.277 [2024-11-09 17:37:51.828957] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.277 [2024-11-09 17:37:51.839134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-11-09 17:37:51.848865] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.277 [2024-11-09 17:37:51.848908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.277 [2024-11-09 17:37:51.848924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.277 [2024-11-09 17:37:51.848934] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.277 [2024-11-09 17:37:51.848942] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.277 [2024-11-09 17:37:51.859299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-11-09 17:37:51.869038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.277 [2024-11-09 17:37:51.869088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.277 [2024-11-09 17:37:51.869106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.277 [2024-11-09 17:37:51.869116] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.277 [2024-11-09 17:37:51.869125] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.277 [2024-11-09 17:37:51.879530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.277 qpair failed and we were unable to recover it. 
00:28:32.277 [2024-11-09 17:37:51.889103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.277 [2024-11-09 17:37:51.889145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.277 [2024-11-09 17:37:51.889161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.277 [2024-11-09 17:37:51.889170] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.277 [2024-11-09 17:37:51.889179] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.277 [2024-11-09 17:37:51.899479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-11-09 17:37:51.909095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.277 [2024-11-09 17:37:51.909138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.277 [2024-11-09 17:37:51.909154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.277 [2024-11-09 17:37:51.909164] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.277 [2024-11-09 17:37:51.909172] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.277 [2024-11-09 17:37:51.919484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-11-09 17:37:51.929195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.277 [2024-11-09 17:37:51.929242] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.277 [2024-11-09 17:37:51.929259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.277 [2024-11-09 17:37:51.929268] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.277 [2024-11-09 17:37:51.929277] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.277 [2024-11-09 17:37:51.939395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.277 qpair failed and we were unable to recover it. 
00:28:32.277 [2024-11-09 17:37:51.949191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.277 [2024-11-09 17:37:51.949230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.277 [2024-11-09 17:37:51.949247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.277 [2024-11-09 17:37:51.949256] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.277 [2024-11-09 17:37:51.949265] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.277 [2024-11-09 17:37:51.959347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-11-09 17:37:51.969323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.277 [2024-11-09 17:37:51.969363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.277 [2024-11-09 17:37:51.969379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.277 [2024-11-09 17:37:51.969389] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.277 [2024-11-09 17:37:51.969397] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.278 [2024-11-09 17:37:51.979601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.278 qpair failed and we were unable to recover it. 00:28:32.278 [2024-11-09 17:37:51.989221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.278 [2024-11-09 17:37:51.989257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.278 [2024-11-09 17:37:51.989272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.278 [2024-11-09 17:37:51.989282] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.278 [2024-11-09 17:37:51.989291] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.278 [2024-11-09 17:37:51.999687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.278 qpair failed and we were unable to recover it. 
00:28:32.278 [2024-11-09 17:37:52.009295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.278 [2024-11-09 17:37:52.009336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.278 [2024-11-09 17:37:52.009352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.278 [2024-11-09 17:37:52.009365] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.278 [2024-11-09 17:37:52.009374] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.278 [2024-11-09 17:37:52.019556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.278 qpair failed and we were unable to recover it. 00:28:32.278 [2024-11-09 17:37:52.029390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.278 [2024-11-09 17:37:52.029435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.278 [2024-11-09 17:37:52.029452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.278 [2024-11-09 17:37:52.029467] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.278 [2024-11-09 17:37:52.029476] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.278 [2024-11-09 17:37:52.039806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.278 qpair failed and we were unable to recover it. 00:28:32.538 [2024-11-09 17:37:52.049471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.538 [2024-11-09 17:37:52.049509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.538 [2024-11-09 17:37:52.049525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.538 [2024-11-09 17:37:52.049535] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.538 [2024-11-09 17:37:52.049546] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.538 [2024-11-09 17:37:52.059897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.538 qpair failed and we were unable to recover it. 
00:28:32.538 [2024-11-09 17:37:52.069592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.538 [2024-11-09 17:37:52.069634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.538 [2024-11-09 17:37:52.069650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.538 [2024-11-09 17:37:52.069659] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.538 [2024-11-09 17:37:52.069668] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.538 [2024-11-09 17:37:52.079925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.538 qpair failed and we were unable to recover it. 00:28:32.538 [2024-11-09 17:37:52.089569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.538 [2024-11-09 17:37:52.089612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.538 [2024-11-09 17:37:52.089628] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.538 [2024-11-09 17:37:52.089638] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.538 [2024-11-09 17:37:52.089646] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.538 [2024-11-09 17:37:52.100067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.538 qpair failed and we were unable to recover it. 00:28:32.538 [2024-11-09 17:37:52.109837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.538 [2024-11-09 17:37:52.109877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.538 [2024-11-09 17:37:52.109893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.538 [2024-11-09 17:37:52.109903] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.538 [2024-11-09 17:37:52.109912] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.538 [2024-11-09 17:37:52.119977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.538 qpair failed and we were unable to recover it. 
00:28:32.538 [2024-11-09 17:37:52.129646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.538 [2024-11-09 17:37:52.129681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.538 [2024-11-09 17:37:52.129698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.538 [2024-11-09 17:37:52.129707] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.538 [2024-11-09 17:37:52.129716] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.538 [2024-11-09 17:37:52.140045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.538 qpair failed and we were unable to recover it. 00:28:32.538 [2024-11-09 17:37:52.149848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.538 [2024-11-09 17:37:52.149893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.538 [2024-11-09 17:37:52.149911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.538 [2024-11-09 17:37:52.149920] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.538 [2024-11-09 17:37:52.149929] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.538 [2024-11-09 17:37:52.160062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.538 qpair failed and we were unable to recover it. 00:28:32.539 [2024-11-09 17:37:52.169813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.539 [2024-11-09 17:37:52.169854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.539 [2024-11-09 17:37:52.169871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.539 [2024-11-09 17:37:52.169880] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.539 [2024-11-09 17:37:52.169889] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.539 [2024-11-09 17:37:52.180291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.539 qpair failed and we were unable to recover it. 
00:28:32.539 [2024-11-09 17:37:52.189911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.539 [2024-11-09 17:37:52.189954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.539 [2024-11-09 17:37:52.189974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.539 [2024-11-09 17:37:52.189984] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.539 [2024-11-09 17:37:52.189992] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.539 [2024-11-09 17:37:52.200176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.539 qpair failed and we were unable to recover it. 00:28:32.539 [2024-11-09 17:37:52.209862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.539 [2024-11-09 17:37:52.209904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.539 [2024-11-09 17:37:52.209922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.539 [2024-11-09 17:37:52.209931] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.539 [2024-11-09 17:37:52.209940] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.539 [2024-11-09 17:37:52.220289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.539 qpair failed and we were unable to recover it. 00:28:32.539 [2024-11-09 17:37:52.230147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.539 [2024-11-09 17:37:52.230184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.539 [2024-11-09 17:37:52.230201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.539 [2024-11-09 17:37:52.230211] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.539 [2024-11-09 17:37:52.230219] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.539 [2024-11-09 17:37:52.240302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.539 qpair failed and we were unable to recover it. 
00:28:32.539 [2024-11-09 17:37:52.250025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.539 [2024-11-09 17:37:52.250065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.539 [2024-11-09 17:37:52.250081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.539 [2024-11-09 17:37:52.250091] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.539 [2024-11-09 17:37:52.250099] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.539 [2024-11-09 17:37:52.260396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.539 qpair failed and we were unable to recover it. 00:28:32.539 [2024-11-09 17:37:52.270046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.539 [2024-11-09 17:37:52.270085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.539 [2024-11-09 17:37:52.270101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.539 [2024-11-09 17:37:52.270111] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.539 [2024-11-09 17:37:52.270119] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.539 [2024-11-09 17:37:52.280575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.539 qpair failed and we were unable to recover it. 00:28:32.539 [2024-11-09 17:37:52.290047] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.539 [2024-11-09 17:37:52.290086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.539 [2024-11-09 17:37:52.290102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.539 [2024-11-09 17:37:52.290112] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.539 [2024-11-09 17:37:52.290120] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.539 [2024-11-09 17:37:52.300522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.539 qpair failed and we were unable to recover it. 
00:28:32.799 [2024-11-09 17:37:52.310307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.799 [2024-11-09 17:37:52.310350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.799 [2024-11-09 17:37:52.310367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.800 [2024-11-09 17:37:52.310377] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.800 [2024-11-09 17:37:52.310386] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.800 [2024-11-09 17:37:52.320542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.800 qpair failed and we were unable to recover it. 00:28:32.800 [2024-11-09 17:37:52.330214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.800 [2024-11-09 17:37:52.330254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.800 [2024-11-09 17:37:52.330271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.800 [2024-11-09 17:37:52.330281] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.800 [2024-11-09 17:37:52.330290] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.800 [2024-11-09 17:37:52.340662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.800 qpair failed and we were unable to recover it. 00:28:32.800 [2024-11-09 17:37:52.350310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.800 [2024-11-09 17:37:52.350351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.800 [2024-11-09 17:37:52.350368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.800 [2024-11-09 17:37:52.350378] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.800 [2024-11-09 17:37:52.350387] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.800 [2024-11-09 17:37:52.360718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.800 qpair failed and we were unable to recover it. 
00:28:32.800 [2024-11-09 17:37:52.370416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.800 [2024-11-09 17:37:52.370461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.800 [2024-11-09 17:37:52.370478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.800 [2024-11-09 17:37:52.370488] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.800 [2024-11-09 17:37:52.370497] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.800 [2024-11-09 17:37:52.380676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.800 qpair failed and we were unable to recover it. 00:28:32.800 [2024-11-09 17:37:52.390551] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.800 [2024-11-09 17:37:52.390594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.800 [2024-11-09 17:37:52.390611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.800 [2024-11-09 17:37:52.390621] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.800 [2024-11-09 17:37:52.390630] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.800 [2024-11-09 17:37:52.400796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.800 qpair failed and we were unable to recover it. 00:28:32.800 [2024-11-09 17:37:52.410399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.800 [2024-11-09 17:37:52.410443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.800 [2024-11-09 17:37:52.410465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.800 [2024-11-09 17:37:52.410475] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.800 [2024-11-09 17:37:52.410484] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.800 [2024-11-09 17:37:52.420910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.800 qpair failed and we were unable to recover it. 
00:28:32.800 [2024-11-09 17:37:52.430665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.800 [2024-11-09 17:37:52.430703] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.800 [2024-11-09 17:37:52.430720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.800 [2024-11-09 17:37:52.430730] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.800 [2024-11-09 17:37:52.430738] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.800 [2024-11-09 17:37:52.441019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.800 qpair failed and we were unable to recover it. 00:28:32.800 [2024-11-09 17:37:52.450677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.800 [2024-11-09 17:37:52.450718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.800 [2024-11-09 17:37:52.450736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.800 [2024-11-09 17:37:52.450746] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.800 [2024-11-09 17:37:52.450758] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.800 [2024-11-09 17:37:52.460863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.800 qpair failed and we were unable to recover it. 00:28:32.800 [2024-11-09 17:37:52.470754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.800 [2024-11-09 17:37:52.470796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.800 [2024-11-09 17:37:52.470813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.800 [2024-11-09 17:37:52.470823] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.800 [2024-11-09 17:37:52.470832] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.800 [2024-11-09 17:37:52.481067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.800 qpair failed and we were unable to recover it. 
00:28:32.800 [2024-11-09 17:37:52.490737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.800 [2024-11-09 17:37:52.490778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.800 [2024-11-09 17:37:52.490794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.800 [2024-11-09 17:37:52.490804] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.800 [2024-11-09 17:37:52.490813] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.800 [2024-11-09 17:37:52.501092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.800 qpair failed and we were unable to recover it. 00:28:32.800 [2024-11-09 17:37:52.510896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.800 [2024-11-09 17:37:52.510937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.800 [2024-11-09 17:37:52.510954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.800 [2024-11-09 17:37:52.510963] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.800 [2024-11-09 17:37:52.510972] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.800 [2024-11-09 17:37:52.521220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.800 qpair failed and we were unable to recover it. 00:28:32.800 [2024-11-09 17:37:52.531016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.800 [2024-11-09 17:37:52.531059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.800 [2024-11-09 17:37:52.531077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.800 [2024-11-09 17:37:52.531086] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.800 [2024-11-09 17:37:52.531095] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.800 [2024-11-09 17:37:52.541226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.800 qpair failed and we were unable to recover it. 
00:28:32.800 [2024-11-09 17:37:52.551053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.800 [2024-11-09 17:37:52.551094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.800 [2024-11-09 17:37:52.551110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.800 [2024-11-09 17:37:52.551120] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.800 [2024-11-09 17:37:52.551129] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.800 [2024-11-09 17:37:52.561453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.800 qpair failed and we were unable to recover it. 00:28:33.060 [2024-11-09 17:37:52.571005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.060 [2024-11-09 17:37:52.571049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.060 [2024-11-09 17:37:52.571067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.060 [2024-11-09 17:37:52.571076] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.060 [2024-11-09 17:37:52.571085] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.060 [2024-11-09 17:37:52.581468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.060 qpair failed and we were unable to recover it. 00:28:33.060 [2024-11-09 17:37:52.591079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.060 [2024-11-09 17:37:52.591120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.060 [2024-11-09 17:37:52.591137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.060 [2024-11-09 17:37:52.591147] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.060 [2024-11-09 17:37:52.591156] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.060 [2024-11-09 17:37:52.601304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.060 qpair failed and we were unable to recover it. 
00:28:33.060 [2024-11-09 17:37:52.611006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.060 [2024-11-09 17:37:52.611047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.060 [2024-11-09 17:37:52.611063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.060 [2024-11-09 17:37:52.611073] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.061 [2024-11-09 17:37:52.611082] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.061 [2024-11-09 17:37:52.621490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.061 qpair failed and we were unable to recover it. 00:28:33.061 [2024-11-09 17:37:52.631107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.061 [2024-11-09 17:37:52.631149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.061 [2024-11-09 17:37:52.631169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.061 [2024-11-09 17:37:52.631179] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.061 [2024-11-09 17:37:52.631189] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.061 [2024-11-09 17:37:52.641503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.061 qpair failed and we were unable to recover it. 00:28:33.061 [2024-11-09 17:37:52.651206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.061 [2024-11-09 17:37:52.651246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.061 [2024-11-09 17:37:52.651263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.061 [2024-11-09 17:37:52.651272] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.061 [2024-11-09 17:37:52.651281] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.061 [2024-11-09 17:37:52.661649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.061 qpair failed and we were unable to recover it. 
00:28:33.061 [2024-11-09 17:37:52.671169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.061 [2024-11-09 17:37:52.671209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.061 [2024-11-09 17:37:52.671226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.061 [2024-11-09 17:37:52.671235] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.061 [2024-11-09 17:37:52.671244] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.061 [2024-11-09 17:37:52.681710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.061 qpair failed and we were unable to recover it. 00:28:33.061 [2024-11-09 17:37:52.691323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.061 [2024-11-09 17:37:52.691368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.061 [2024-11-09 17:37:52.691384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.061 [2024-11-09 17:37:52.691394] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.061 [2024-11-09 17:37:52.691403] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.061 [2024-11-09 17:37:52.701611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.061 qpair failed and we were unable to recover it. 00:28:33.061 [2024-11-09 17:37:52.711409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.061 [2024-11-09 17:37:52.711447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.061 [2024-11-09 17:37:52.711470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.061 [2024-11-09 17:37:52.711480] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.061 [2024-11-09 17:37:52.711489] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.061 [2024-11-09 17:37:52.721794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.061 qpair failed and we were unable to recover it. 
00:28:33.061 [2024-11-09 17:37:52.731388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.061 [2024-11-09 17:37:52.731431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.061 [2024-11-09 17:37:52.731448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.061 [2024-11-09 17:37:52.731464] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.061 [2024-11-09 17:37:52.731472] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.061 [2024-11-09 17:37:52.741740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.061 qpair failed and we were unable to recover it. 00:28:33.061 [2024-11-09 17:37:52.751474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.061 [2024-11-09 17:37:52.751514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.061 [2024-11-09 17:37:52.751531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.061 [2024-11-09 17:37:52.751541] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.061 [2024-11-09 17:37:52.751550] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.061 [2024-11-09 17:37:52.761752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.061 qpair failed and we were unable to recover it. 00:28:33.061 [2024-11-09 17:37:52.771469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.061 [2024-11-09 17:37:52.771513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.061 [2024-11-09 17:37:52.771530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.061 [2024-11-09 17:37:52.771540] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.061 [2024-11-09 17:37:52.771550] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.061 [2024-11-09 17:37:52.782052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.061 qpair failed and we were unable to recover it. 
00:28:33.061 [2024-11-09 17:37:52.791718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.061 [2024-11-09 17:37:52.791760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.061 [2024-11-09 17:37:52.791776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.061 [2024-11-09 17:37:52.791786] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.061 [2024-11-09 17:37:52.791796] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.061 [2024-11-09 17:37:52.802211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.061 qpair failed and we were unable to recover it. 00:28:33.061 [2024-11-09 17:37:52.811737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.061 [2024-11-09 17:37:52.811778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.061 [2024-11-09 17:37:52.811798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.061 [2024-11-09 17:37:52.811807] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.061 [2024-11-09 17:37:52.811816] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.061 [2024-11-09 17:37:52.821990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.061 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-09 17:37:52.831781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.321 [2024-11-09 17:37:52.831826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.321 [2024-11-09 17:37:52.831843] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.321 [2024-11-09 17:37:52.831853] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.321 [2024-11-09 17:37:52.831861] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.321 [2024-11-09 17:37:52.842254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.321 qpair failed and we were unable to recover it. 
00:28:33.321 [2024-11-09 17:37:52.851783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.321 [2024-11-09 17:37:52.851824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.322 [2024-11-09 17:37:52.851840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.322 [2024-11-09 17:37:52.851850] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.322 [2024-11-09 17:37:52.851859] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.322 [2024-11-09 17:37:52.862170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-09 17:37:52.871854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.322 [2024-11-09 17:37:52.871897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.322 [2024-11-09 17:37:52.871914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.322 [2024-11-09 17:37:52.871924] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.322 [2024-11-09 17:37:52.871933] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.322 [2024-11-09 17:37:52.882141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-09 17:37:52.891977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.322 [2024-11-09 17:37:52.892020] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.322 [2024-11-09 17:37:52.892037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.322 [2024-11-09 17:37:52.892047] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.322 [2024-11-09 17:37:52.892063] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.322 [2024-11-09 17:37:52.902341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.322 qpair failed and we were unable to recover it. 
00:28:33.322 [2024-11-09 17:37:52.911934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.322 [2024-11-09 17:37:52.911980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.322 [2024-11-09 17:37:52.911997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.322 [2024-11-09 17:37:52.912006] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.322 [2024-11-09 17:37:52.912015] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.322 [2024-11-09 17:37:52.922419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-09 17:37:52.931978] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.322 [2024-11-09 17:37:52.932021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.322 [2024-11-09 17:37:52.932039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.322 [2024-11-09 17:37:52.932049] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.322 [2024-11-09 17:37:52.932058] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.322 [2024-11-09 17:37:52.942469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-09 17:37:52.952133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.322 [2024-11-09 17:37:52.952176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.322 [2024-11-09 17:37:52.952193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.322 [2024-11-09 17:37:52.952203] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.322 [2024-11-09 17:37:52.952212] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.322 [2024-11-09 17:37:52.962446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.322 qpair failed and we were unable to recover it. 
00:28:33.322 [2024-11-09 17:37:52.972141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.322 [2024-11-09 17:37:52.972180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.322 [2024-11-09 17:37:52.972198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.322 [2024-11-09 17:37:52.972207] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.322 [2024-11-09 17:37:52.972216] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.322 [2024-11-09 17:37:52.982495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-09 17:37:52.992248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.322 [2024-11-09 17:37:52.992291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.322 [2024-11-09 17:37:52.992308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.322 [2024-11-09 17:37:52.992317] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.322 [2024-11-09 17:37:52.992326] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.322 [2024-11-09 17:37:53.002616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-09 17:37:53.012218] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.322 [2024-11-09 17:37:53.012261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.322 [2024-11-09 17:37:53.012277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.322 [2024-11-09 17:37:53.012287] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.322 [2024-11-09 17:37:53.012296] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.322 [2024-11-09 17:37:53.022702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.322 qpair failed and we were unable to recover it. 
00:28:33.322 [2024-11-09 17:37:53.032324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.322 [2024-11-09 17:37:53.032365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.322 [2024-11-09 17:37:53.032382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.322 [2024-11-09 17:37:53.032392] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.322 [2024-11-09 17:37:53.032401] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.322 [2024-11-09 17:37:53.042719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-09 17:37:53.052383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.322 [2024-11-09 17:37:53.052424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.322 [2024-11-09 17:37:53.052440] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.322 [2024-11-09 17:37:53.052450] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.322 [2024-11-09 17:37:53.052471] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.322 [2024-11-09 17:37:53.062811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-09 17:37:53.072356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.322 [2024-11-09 17:37:53.072401] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.322 [2024-11-09 17:37:53.072418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.322 [2024-11-09 17:37:53.072432] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.322 [2024-11-09 17:37:53.072441] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.322 [2024-11-09 17:37:53.082884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.322 qpair failed and we were unable to recover it. 
00:28:33.582 [2024-11-09 17:37:53.092509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.583 [2024-11-09 17:37:53.092546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.583 [2024-11-09 17:37:53.092562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.583 [2024-11-09 17:37:53.092572] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.583 [2024-11-09 17:37:53.092581] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.583 [2024-11-09 17:37:53.102855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.583 qpair failed and we were unable to recover it. 00:28:33.583 [2024-11-09 17:37:53.112659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.583 [2024-11-09 17:37:53.112701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.583 [2024-11-09 17:37:53.112718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.583 [2024-11-09 17:37:53.112728] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.583 [2024-11-09 17:37:53.112737] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.583 [2024-11-09 17:37:53.123011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.583 qpair failed and we were unable to recover it. 00:28:33.583 [2024-11-09 17:37:53.132709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.583 [2024-11-09 17:37:53.132749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.583 [2024-11-09 17:37:53.132766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.583 [2024-11-09 17:37:53.132776] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.583 [2024-11-09 17:37:53.132785] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.583 [2024-11-09 17:37:53.143040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.583 qpair failed and we were unable to recover it. 
00:28:33.583 [2024-11-09 17:37:53.152765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.583 [2024-11-09 17:37:53.152809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.583 [2024-11-09 17:37:53.152825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.583 [2024-11-09 17:37:53.152835] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.583 [2024-11-09 17:37:53.152843] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.583 [2024-11-09 17:37:53.163134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.583 qpair failed and we were unable to recover it. 00:28:33.583 [2024-11-09 17:37:53.172682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.583 [2024-11-09 17:37:53.172721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.583 [2024-11-09 17:37:53.172738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.583 [2024-11-09 17:37:53.172747] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.583 [2024-11-09 17:37:53.172756] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.583 [2024-11-09 17:37:53.183116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.583 qpair failed and we were unable to recover it. 00:28:33.583 [2024-11-09 17:37:53.192861] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.583 [2024-11-09 17:37:53.192895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.583 [2024-11-09 17:37:53.192912] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.583 [2024-11-09 17:37:53.192922] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.583 [2024-11-09 17:37:53.192930] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.583 [2024-11-09 17:37:53.203216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.583 qpair failed and we were unable to recover it. 
00:28:33.583 [2024-11-09 17:37:53.212913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.583 [2024-11-09 17:37:53.212953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.583 [2024-11-09 17:37:53.212969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.583 [2024-11-09 17:37:53.212978] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.583 [2024-11-09 17:37:53.212987] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.583 [2024-11-09 17:37:53.223187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.583 qpair failed and we were unable to recover it. 00:28:33.583 [2024-11-09 17:37:53.232966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.583 [2024-11-09 17:37:53.233011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.583 [2024-11-09 17:37:53.233028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.583 [2024-11-09 17:37:53.233038] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.583 [2024-11-09 17:37:53.233046] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.583 [2024-11-09 17:37:53.243307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.583 qpair failed and we were unable to recover it. 00:28:33.583 [2024-11-09 17:37:53.252865] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.583 [2024-11-09 17:37:53.252908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.583 [2024-11-09 17:37:53.252927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.583 [2024-11-09 17:37:53.252937] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.583 [2024-11-09 17:37:53.252946] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.583 [2024-11-09 17:37:53.263217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.583 qpair failed and we were unable to recover it. 
00:28:33.583 [2024-11-09 17:37:53.273060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.583 [2024-11-09 17:37:53.273100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.583 [2024-11-09 17:37:53.273116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.583 [2024-11-09 17:37:53.273126] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.583 [2024-11-09 17:37:53.273134] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.583 [2024-11-09 17:37:53.283427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.583 qpair failed and we were unable to recover it. 00:28:33.583 [2024-11-09 17:37:53.293069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.583 [2024-11-09 17:37:53.293111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.583 [2024-11-09 17:37:53.293127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.583 [2024-11-09 17:37:53.293136] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.583 [2024-11-09 17:37:53.293145] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.583 [2024-11-09 17:37:53.303418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.583 qpair failed and we were unable to recover it. 00:28:33.583 [2024-11-09 17:37:53.313166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.583 [2024-11-09 17:37:53.313210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.583 [2024-11-09 17:37:53.313226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.583 [2024-11-09 17:37:53.313235] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.583 [2024-11-09 17:37:53.313244] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.583 [2024-11-09 17:37:53.323571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.583 qpair failed and we were unable to recover it. 
00:28:33.583 [2024-11-09 17:37:53.333260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.583 [2024-11-09 17:37:53.333300] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.583 [2024-11-09 17:37:53.333317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.583 [2024-11-09 17:37:53.333327] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.583 [2024-11-09 17:37:53.333339] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.583 [2024-11-09 17:37:53.343716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.583 qpair failed and we were unable to recover it. 00:28:33.844 [2024-11-09 17:37:53.353368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.844 [2024-11-09 17:37:53.353412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.844 [2024-11-09 17:37:53.353429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.844 [2024-11-09 17:37:53.353438] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.844 [2024-11-09 17:37:53.353447] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.844 [2024-11-09 17:37:53.363601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.844 qpair failed and we were unable to recover it. 00:28:33.844 [2024-11-09 17:37:53.373366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.844 [2024-11-09 17:37:53.373406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.844 [2024-11-09 17:37:53.373422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.844 [2024-11-09 17:37:53.373432] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.844 [2024-11-09 17:37:53.373440] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.844 [2024-11-09 17:37:53.383889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.844 qpair failed and we were unable to recover it. 
00:28:33.844 [2024-11-09 17:37:53.393425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.844 [2024-11-09 17:37:53.393476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.844 [2024-11-09 17:37:53.393492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.844 [2024-11-09 17:37:53.393502] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.844 [2024-11-09 17:37:53.393510] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.844 [2024-11-09 17:37:53.403920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.844 qpair failed and we were unable to recover it. 00:28:33.844 [2024-11-09 17:37:53.413425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.844 [2024-11-09 17:37:53.413475] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.844 [2024-11-09 17:37:53.413491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.844 [2024-11-09 17:37:53.413501] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.844 [2024-11-09 17:37:53.413510] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.844 [2024-11-09 17:37:53.423900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.844 qpair failed and we were unable to recover it. 00:28:33.844 [2024-11-09 17:37:53.433621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.844 [2024-11-09 17:37:53.433664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.844 [2024-11-09 17:37:53.433692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.844 [2024-11-09 17:37:53.433706] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.844 [2024-11-09 17:37:53.433719] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:33.844 [2024-11-09 17:37:53.443977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.844 qpair failed and we were unable to recover it. 
00:28:33.844 [2024-11-09 17:37:53.453634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.844 [2024-11-09 17:37:53.453677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.844 [2024-11-09 17:37:53.453695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.844 [2024-11-09 17:37:53.453705] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.844 [2024-11-09 17:37:53.453714] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:33.844 [2024-11-09 17:37:53.464064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.844 qpair failed and we were unable to recover it. 00:28:33.844 [2024-11-09 17:37:53.473708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.844 [2024-11-09 17:37:53.473748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.844 [2024-11-09 17:37:53.473764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.844 [2024-11-09 17:37:53.473774] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.844 [2024-11-09 17:37:53.473783] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:33.844 [2024-11-09 17:37:53.484312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.844 qpair failed and we were unable to recover it. 00:28:33.844 [2024-11-09 17:37:53.493771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.844 [2024-11-09 17:37:53.493810] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.844 [2024-11-09 17:37:53.493827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.844 [2024-11-09 17:37:53.493837] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.844 [2024-11-09 17:37:53.493846] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:33.844 [2024-11-09 17:37:53.504295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.844 qpair failed and we were unable to recover it. 
00:28:33.844 [2024-11-09 17:37:53.513879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.844 [2024-11-09 17:37:53.513921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.844 [2024-11-09 17:37:53.513938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.844 [2024-11-09 17:37:53.513951] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.844 [2024-11-09 17:37:53.513959] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:33.844 [2024-11-09 17:37:53.524330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.844 qpair failed and we were unable to recover it. 00:28:33.844 [2024-11-09 17:37:53.533866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.844 [2024-11-09 17:37:53.533906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.844 [2024-11-09 17:37:53.533922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.844 [2024-11-09 17:37:53.533932] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.844 [2024-11-09 17:37:53.533941] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:33.844 [2024-11-09 17:37:53.544405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.844 qpair failed and we were unable to recover it. 00:28:33.844 [2024-11-09 17:37:53.553997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.844 [2024-11-09 17:37:53.554037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.844 [2024-11-09 17:37:53.554053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.844 [2024-11-09 17:37:53.554063] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.844 [2024-11-09 17:37:53.554071] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:33.845 [2024-11-09 17:37:53.564329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.845 qpair failed and we were unable to recover it. 
00:28:33.845 [2024-11-09 17:37:53.574074] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.845 [2024-11-09 17:37:53.574117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.845 [2024-11-09 17:37:53.574133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.845 [2024-11-09 17:37:53.574142] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.845 [2024-11-09 17:37:53.574151] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:33.845 [2024-11-09 17:37:53.584373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.845 qpair failed and we were unable to recover it. 00:28:33.845 [2024-11-09 17:37:53.594134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.845 [2024-11-09 17:37:53.594175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.845 [2024-11-09 17:37:53.594192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.845 [2024-11-09 17:37:53.594201] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.845 [2024-11-09 17:37:53.594210] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:33.845 [2024-11-09 17:37:53.604496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.845 qpair failed and we were unable to recover it. 00:28:34.105 [2024-11-09 17:37:53.614003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.105 [2024-11-09 17:37:53.614047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.105 [2024-11-09 17:37:53.614064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.105 [2024-11-09 17:37:53.614074] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.105 [2024-11-09 17:37:53.614082] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.105 [2024-11-09 17:37:53.624482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.105 qpair failed and we were unable to recover it. 
00:28:34.105 [2024-11-09 17:37:53.634201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.105 [2024-11-09 17:37:53.634244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.105 [2024-11-09 17:37:53.634261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.105 [2024-11-09 17:37:53.634270] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.105 [2024-11-09 17:37:53.634279] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.105 [2024-11-09 17:37:53.644561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.105 qpair failed and we were unable to recover it. 00:28:34.105 [2024-11-09 17:37:53.654202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.105 [2024-11-09 17:37:53.654245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.105 [2024-11-09 17:37:53.654262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.105 [2024-11-09 17:37:53.654271] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.105 [2024-11-09 17:37:53.654280] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.105 [2024-11-09 17:37:53.664775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.105 qpair failed and we were unable to recover it. 00:28:34.105 [2024-11-09 17:37:53.674321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.105 [2024-11-09 17:37:53.674366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.105 [2024-11-09 17:37:53.674382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.105 [2024-11-09 17:37:53.674392] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.105 [2024-11-09 17:37:53.674400] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.105 [2024-11-09 17:37:53.684910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.105 qpair failed and we were unable to recover it. 
00:28:34.105 [2024-11-09 17:37:53.694413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.105 [2024-11-09 17:37:53.694452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.105 [2024-11-09 17:37:53.694479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.105 [2024-11-09 17:37:53.694489] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.106 [2024-11-09 17:37:53.694497] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.106 [2024-11-09 17:37:53.704874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.106 qpair failed and we were unable to recover it. 00:28:34.106 [2024-11-09 17:37:53.714438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.106 [2024-11-09 17:37:53.714489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.106 [2024-11-09 17:37:53.714506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.106 [2024-11-09 17:37:53.714516] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.106 [2024-11-09 17:37:53.714525] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.106 [2024-11-09 17:37:53.724989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.106 qpair failed and we were unable to recover it. 00:28:34.106 [2024-11-09 17:37:53.734540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.106 [2024-11-09 17:37:53.734579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.106 [2024-11-09 17:37:53.734596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.106 [2024-11-09 17:37:53.734605] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.106 [2024-11-09 17:37:53.734614] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.106 [2024-11-09 17:37:53.744923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.106 qpair failed and we were unable to recover it. 
00:28:34.106 [2024-11-09 17:37:53.754603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.106 [2024-11-09 17:37:53.754643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.106 [2024-11-09 17:37:53.754660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.106 [2024-11-09 17:37:53.754669] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.106 [2024-11-09 17:37:53.754678] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.106 [2024-11-09 17:37:53.764987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.106 qpair failed and we were unable to recover it. 00:28:34.106 [2024-11-09 17:37:53.774607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.106 [2024-11-09 17:37:53.774648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.106 [2024-11-09 17:37:53.774664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.106 [2024-11-09 17:37:53.774673] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.106 [2024-11-09 17:37:53.774685] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.106 [2024-11-09 17:37:53.785233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.106 qpair failed and we were unable to recover it. 00:28:34.106 [2024-11-09 17:37:53.794754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.106 [2024-11-09 17:37:53.794802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.106 [2024-11-09 17:37:53.794819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.106 [2024-11-09 17:37:53.794828] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.106 [2024-11-09 17:37:53.794837] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.106 [2024-11-09 17:37:53.805146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.106 qpair failed and we were unable to recover it. 
00:28:34.106 [2024-11-09 17:37:53.814800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.106 [2024-11-09 17:37:53.814844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.106 [2024-11-09 17:37:53.814861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.106 [2024-11-09 17:37:53.814870] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.106 [2024-11-09 17:37:53.814879] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.106 [2024-11-09 17:37:53.825190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.106 qpair failed and we were unable to recover it. 00:28:34.106 [2024-11-09 17:37:53.834824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.106 [2024-11-09 17:37:53.834862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.106 [2024-11-09 17:37:53.834878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.106 [2024-11-09 17:37:53.834888] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.106 [2024-11-09 17:37:53.834896] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.106 [2024-11-09 17:37:53.845292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.106 qpair failed and we were unable to recover it. 00:28:34.106 [2024-11-09 17:37:53.854891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.106 [2024-11-09 17:37:53.854934] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.106 [2024-11-09 17:37:53.854949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.106 [2024-11-09 17:37:53.854959] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.106 [2024-11-09 17:37:53.854968] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.106 [2024-11-09 17:37:53.865266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.106 qpair failed and we were unable to recover it. 
00:28:34.366 [2024-11-09 17:37:53.874906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.366 [2024-11-09 17:37:53.874958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.366 [2024-11-09 17:37:53.874974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.366 [2024-11-09 17:37:53.874984] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.366 [2024-11-09 17:37:53.874993] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.366 [2024-11-09 17:37:53.885330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-09 17:37:53.895038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.367 [2024-11-09 17:37:53.895078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.367 [2024-11-09 17:37:53.895095] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.367 [2024-11-09 17:37:53.895104] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.367 [2024-11-09 17:37:53.895113] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.367 [2024-11-09 17:37:53.905412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-09 17:37:53.915037] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.367 [2024-11-09 17:37:53.915071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.367 [2024-11-09 17:37:53.915087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.367 [2024-11-09 17:37:53.915097] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.367 [2024-11-09 17:37:53.915105] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.367 [2024-11-09 17:37:53.925391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.367 qpair failed and we were unable to recover it. 
00:28:34.367 [2024-11-09 17:37:53.935149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.367 [2024-11-09 17:37:53.935190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.367 [2024-11-09 17:37:53.935207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.367 [2024-11-09 17:37:53.935217] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.367 [2024-11-09 17:37:53.935226] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.367 [2024-11-09 17:37:53.945505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-09 17:37:53.955198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.367 [2024-11-09 17:37:53.955236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.367 [2024-11-09 17:37:53.955252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.367 [2024-11-09 17:37:53.955266] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.367 [2024-11-09 17:37:53.955275] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.367 [2024-11-09 17:37:53.965261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-09 17:37:53.975310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.367 [2024-11-09 17:37:53.975353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.367 [2024-11-09 17:37:53.975370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.367 [2024-11-09 17:37:53.975379] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.367 [2024-11-09 17:37:53.975388] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.367 [2024-11-09 17:37:53.985618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.367 qpair failed and we were unable to recover it. 
00:28:34.367 [2024-11-09 17:37:53.995301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.367 [2024-11-09 17:37:53.995342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.367 [2024-11-09 17:37:53.995359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.367 [2024-11-09 17:37:53.995368] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.367 [2024-11-09 17:37:53.995377] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.367 [2024-11-09 17:37:54.005694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-09 17:37:54.015377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.367 [2024-11-09 17:37:54.015419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.367 [2024-11-09 17:37:54.015435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.367 [2024-11-09 17:37:54.015444] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.367 [2024-11-09 17:37:54.015453] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.367 [2024-11-09 17:37:54.025778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-09 17:37:54.035409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.367 [2024-11-09 17:37:54.035461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.367 [2024-11-09 17:37:54.035477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.367 [2024-11-09 17:37:54.035487] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.367 [2024-11-09 17:37:54.035495] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.367 [2024-11-09 17:37:54.045851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.367 qpair failed and we were unable to recover it. 
00:28:34.367 [2024-11-09 17:37:54.055520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.367 [2024-11-09 17:37:54.055564] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.367 [2024-11-09 17:37:54.055580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.367 [2024-11-09 17:37:54.055589] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.367 [2024-11-09 17:37:54.055598] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.367 [2024-11-09 17:37:54.065959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-09 17:37:54.075681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.367 [2024-11-09 17:37:54.075719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.367 [2024-11-09 17:37:54.075735] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.367 [2024-11-09 17:37:54.075744] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.367 [2024-11-09 17:37:54.075753] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.367 [2024-11-09 17:37:54.086107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-09 17:37:54.095719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.367 [2024-11-09 17:37:54.095761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.367 [2024-11-09 17:37:54.095777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.367 [2024-11-09 17:37:54.095786] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.367 [2024-11-09 17:37:54.095795] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.367 [2024-11-09 17:37:54.106068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.367 qpair failed and we were unable to recover it. 
00:28:34.367 [2024-11-09 17:37:54.115787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.367 [2024-11-09 17:37:54.115827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.367 [2024-11-09 17:37:54.115844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.367 [2024-11-09 17:37:54.115853] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.367 [2024-11-09 17:37:54.115862] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.367 [2024-11-09 17:37:54.126270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.627 [2024-11-09 17:37:54.135803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.627 [2024-11-09 17:37:54.135840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.627 [2024-11-09 17:37:54.135859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.627 [2024-11-09 17:37:54.135869] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.627 [2024-11-09 17:37:54.135878] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.627 [2024-11-09 17:37:54.146168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.627 qpair failed and we were unable to recover it. 00:28:34.627 [2024-11-09 17:37:54.155870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.627 [2024-11-09 17:37:54.155904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.627 [2024-11-09 17:37:54.155920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.627 [2024-11-09 17:37:54.155930] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.627 [2024-11-09 17:37:54.155938] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.627 [2024-11-09 17:37:54.166161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.627 qpair failed and we were unable to recover it. 
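Each failed attempt above, and in the blocks that continue below, follows the same sequence: the target rejects the Fabrics CONNECT for an unknown controller ID, the host sees the CONNECT complete with sct 1, sc 130 (0x82, the Fabrics CONNECT "Invalid Parameters" status), and the next poll of the queue pair returns -6 (ENXIO), after which the test gives up on that qpair. As a rough sketch of the host-side polling pattern behind those messages (this is not the test's actual code; the function name and the recovery policy are assumptions):

#include "spdk/nvme.h"

/*
 * Illustrative only: poll an I/O qpair and treat a negative return as the
 * "CQ transport error -6 (No such device or address)" condition in the log.
 * After such an error the qpair is unusable; freeing it (or resetting the
 * whole controller) is one possible policy, not necessarily what the test does.
 */
static int
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
    int32_t rc;

    rc = spdk_nvme_qpair_process_completions(qpair, 0 /* 0 = no completion limit */);
    if (rc < 0) {
        /* Transport-level failure (e.g. -ENXIO == -6): give up on this qpair. */
        spdk_nvme_ctrlr_free_io_qpair(qpair);
        return -1;
    }

    return 0; /* rc completions were processed successfully */
}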
00:28:34.627 [2024-11-09 17:37:54.176023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.627 [2024-11-09 17:37:54.176064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.627 [2024-11-09 17:37:54.176080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.627 [2024-11-09 17:37:54.176090] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.627 [2024-11-09 17:37:54.176098] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:34.627 [2024-11-09 17:37:54.186258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.627 qpair failed and we were unable to recover it. 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Write completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Write completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Write completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Write completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Write completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Write completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Write completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Write completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Write completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Write completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Write completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Write completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Write completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Write 
completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 Read completed with error (sct=0, sc=8) 00:28:35.566 starting I/O failed 00:28:35.566 [2024-11-09 17:37:55.191444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.566 [2024-11-09 17:37:55.198574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.566 [2024-11-09 17:37:55.198625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.566 [2024-11-09 17:37:55.198645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.566 [2024-11-09 17:37:55.198655] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.566 [2024-11-09 17:37:55.198664] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002c97c0 00:28:35.566 [2024-11-09 17:37:55.209223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.566 qpair failed and we were unable to recover it. 00:28:35.566 [2024-11-09 17:37:55.218810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.566 [2024-11-09 17:37:55.218854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.566 [2024-11-09 17:37:55.218872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.566 [2024-11-09 17:37:55.218881] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.566 [2024-11-09 17:37:55.218891] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002c97c0 00:28:35.567 [2024-11-09 17:37:55.229345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.567 qpair failed and we were unable to recover it. 00:28:35.567 [2024-11-09 17:37:55.238912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.567 [2024-11-09 17:37:55.238951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.567 [2024-11-09 17:37:55.238973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.567 [2024-11-09 17:37:55.238983] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.567 [2024-11-09 17:37:55.238992] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:35.567 [2024-11-09 17:37:55.249268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.567 qpair failed and we were unable to recover it. 
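The burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" lines above is the host workload's outstanding I/O being failed back while its queue pair is torn down; sct=0, sc=8 appears to be the generic "Command Aborted due to SQ Deletion" status, and the paired "starting I/O failed" lines suggest resubmission on the dead qpair also fails. A hedged sketch of the kind of completion callback that reports such failures (the function name and message format are assumptions, not the test's actual code):

#include <stdio.h>
#include "spdk/nvme.h"

/*
 * Illustrative I/O completion callback: reads/writes still in flight when a
 * qpair is destroyed complete with an error status such as sct=0, sc=8
 * (generic status type, command aborted due to SQ deletion), which is what
 * the burst of failed completions in the log above shows.
 */
static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    (void)ctx; /* per-I/O context, unused in this sketch */

    if (spdk_nvme_cpl_is_error(cpl)) {
        fprintf(stderr, "I/O completed with error (sct=%d, sc=%d)\n",
                cpl->status.sct, cpl->status.sc);
        return;
    }

    /* Success path: return the buffer to the workload, update counters, etc. */
}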
00:28:35.567 [2024-11-09 17:37:55.258953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.567 [2024-11-09 17:37:55.258994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.567 [2024-11-09 17:37:55.259011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.567 [2024-11-09 17:37:55.259021] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.567 [2024-11-09 17:37:55.259030] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:35.567 [2024-11-09 17:37:55.269424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.567 qpair failed and we were unable to recover it. 00:28:35.567 [2024-11-09 17:37:55.269569] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:35.567 A controller has encountered a failure and is being reset. 00:28:35.567 [2024-11-09 17:37:55.279104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.567 [2024-11-09 17:37:55.279150] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.567 [2024-11-09 17:37:55.279178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.567 [2024-11-09 17:37:55.279192] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.567 [2024-11-09 17:37:55.279205] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:35.567 [2024-11-09 17:37:55.289541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:35.567 qpair failed and we were unable to recover it. 00:28:35.567 [2024-11-09 17:37:55.298987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.567 [2024-11-09 17:37:55.299030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.567 [2024-11-09 17:37:55.299048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.567 [2024-11-09 17:37:55.299057] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.567 [2024-11-09 17:37:55.299066] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:35.567 [2024-11-09 17:37:55.309435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:35.567 qpair failed and we were unable to recover it. 
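Here the admin queue has failed as well: the "Submitting Keep Alive failed" error above marks the controller as failed ("A controller has encountered a failure and is being reset"), and the "Controller properly reset." line just below is the successful outcome of that recovery. A minimal sketch of the detect-and-reset step, assuming the application drives the reset itself; re-creating the I/O qpairs after a successful reset is omitted:

#include "spdk/nvme.h"

/*
 * Illustrative recovery step: poll the admin queue, and if the controller has
 * been marked failed (for example after a failed Keep Alive), attempt a reset.
 * A return of 0 corresponds to the "Controller properly reset." message.
 */
static int
check_and_reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_process_admin_completions(ctrlr);

    if (!spdk_nvme_ctrlr_is_failed(ctrlr)) {
        return 0; /* controller is healthy, nothing to do */
    }

    return spdk_nvme_ctrlr_reset(ctrlr); /* 0 on success */
}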
00:28:35.567 [2024-11-09 17:37:55.309562] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:35.826 [2024-11-09 17:37:55.340406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:35.826 Controller properly reset. 00:28:35.826 Initializing NVMe Controllers 00:28:35.826 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.826 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.826 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:35.826 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:35.826 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:35.826 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:35.827 Initialization complete. Launching workers. 00:28:35.827 Starting thread on core 1 00:28:35.827 Starting thread on core 2 00:28:35.827 Starting thread on core 3 00:28:35.827 Starting thread on core 0 00:28:35.827 17:37:55 -- host/target_disconnect.sh@59 -- # sync 00:28:35.827 00:28:35.827 real 0m12.578s 00:28:35.827 user 0m26.199s 00:28:35.827 sys 0m3.062s 00:28:35.827 17:37:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:35.827 17:37:55 -- common/autotest_common.sh@10 -- # set +x 00:28:35.827 ************************************ 00:28:35.827 END TEST nvmf_target_disconnect_tc2 00:28:35.827 ************************************ 00:28:35.827 17:37:55 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:28:35.827 17:37:55 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:28:35.827 17:37:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:35.827 17:37:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:35.827 17:37:55 -- common/autotest_common.sh@10 -- # set +x 00:28:35.827 ************************************ 00:28:35.827 START TEST nvmf_target_disconnect_tc3 00:28:35.827 ************************************ 00:28:35.827 17:37:55 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc3 00:28:35.827 17:37:55 -- host/target_disconnect.sh@65 -- # reconnectpid=2849323 00:28:35.827 17:37:55 -- host/target_disconnect.sh@67 -- # sleep 2 00:28:35.827 17:37:55 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:28:35.827 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.732 17:37:57 -- host/target_disconnect.sh@68 -- # kill -9 2848038 00:28:37.732 17:37:57 -- host/target_disconnect.sh@70 -- # sleep 2 00:28:39.112 Read completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Read completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Read completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write 
completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Read completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Read completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Read completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Read completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Read completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Read completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Read completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Read completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Write completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 Read completed with error (sct=0, sc=8) 00:28:39.112 starting I/O failed 00:28:39.112 [2024-11-09 17:37:58.635711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:40.050 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 2848038 Killed "${NVMF_APP[@]}" "$@" 00:28:40.050 17:37:59 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:28:40.050 17:37:59 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:40.050 17:37:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:40.050 17:37:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:40.050 17:37:59 -- common/autotest_common.sh@10 -- # set +x 00:28:40.050 17:37:59 -- nvmf/common.sh@469 -- # nvmfpid=2849962 00:28:40.050 17:37:59 -- nvmf/common.sh@470 -- # waitforlisten 2849962 00:28:40.050 17:37:59 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:40.050 17:37:59 -- common/autotest_common.sh@829 -- # '[' -z 2849962 ']' 00:28:40.050 17:37:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.050 17:37:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:40.050 17:37:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.050 17:37:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:40.050 17:37:59 -- common/autotest_common.sh@10 -- # set +x 00:28:40.050 [2024-11-09 17:37:59.511022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:40.050 [2024-11-09 17:37:59.511075] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.050 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.050 [2024-11-09 17:37:59.596693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Write completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 Read 
completed with error (sct=0, sc=8) 00:28:40.050 starting I/O failed 00:28:40.050 [2024-11-09 17:37:59.640696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.050 [2024-11-09 17:37:59.665063] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:40.050 [2024-11-09 17:37:59.665167] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:40.050 [2024-11-09 17:37:59.665178] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.050 [2024-11-09 17:37:59.665186] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:40.050 [2024-11-09 17:37:59.665308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:40.051 [2024-11-09 17:37:59.665421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:40.051 [2024-11-09 17:37:59.665530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:40.051 [2024-11-09 17:37:59.665531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:40.619 17:38:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:40.619 17:38:00 -- common/autotest_common.sh@862 -- # return 0 00:28:40.619 17:38:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:40.619 17:38:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:40.619 17:38:00 -- common/autotest_common.sh@10 -- # set +x 00:28:40.619 17:38:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.619 17:38:00 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:40.619 17:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.619 17:38:00 -- common/autotest_common.sh@10 -- # set +x 00:28:40.878 Malloc0 00:28:40.878 17:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.878 17:38:00 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:40.878 17:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.878 17:38:00 -- common/autotest_common.sh@10 -- # set +x 00:28:40.878 [2024-11-09 17:38:00.426620] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8b23c0/0x8bddc0) succeed. 00:28:40.878 [2024-11-09 17:38:00.436342] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8b39b0/0x93de00) succeed. 
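The failover target above is started with -m 0xF0 on purpose, so its reactors land on cores 4-7 while the reconnect example launched earlier keeps cores 0-3 (-c 0xF). The bring-up that follows is the standard SPDK RPC sequence; a minimal sketch, assuming the in-tree rpc.py client (the test itself issues the same calls through its rpc_cmd wrapper):

# Sketch of the failover-target bring-up; values are the ones visible in the log around here.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &                 # reactors on cores 4-7
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB malloc bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# The listener goes on the alternate address so the initiator has somewhere to fail over to.
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420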
00:28:40.878 17:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.878 17:38:00 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:40.878 17:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.879 17:38:00 -- common/autotest_common.sh@10 -- # set +x 00:28:40.879 17:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.879 17:38:00 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:40.879 17:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.879 17:38:00 -- common/autotest_common.sh@10 -- # set +x 00:28:40.879 17:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.879 17:38:00 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:28:40.879 17:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.879 17:38:00 -- common/autotest_common.sh@10 -- # set +x 00:28:40.879 [2024-11-09 17:38:00.575450] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:28:40.879 17:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.879 17:38:00 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:28:40.879 17:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.879 17:38:00 -- common/autotest_common.sh@10 -- # set +x 00:28:40.879 17:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.879 17:38:00 -- host/target_disconnect.sh@73 -- # wait 2849323 00:28:40.879 Write completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Write completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Write completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Write completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Write completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Write completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Write completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Write completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with 
error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Write completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Write completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Write completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Write completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Read completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Write completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 Write completed with error (sct=0, sc=8) 00:28:40.879 starting I/O failed 00:28:40.879 [2024-11-09 17:38:00.645917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Read completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Read completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Read completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Read completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Read completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Read completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Read completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Read completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Read completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Read completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Read completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Read completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Read completed with error 
(sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Read completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 Write completed with error (sct=0, sc=8) 00:28:42.257 starting I/O failed 00:28:42.257 [2024-11-09 17:38:01.650886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:42.257 [2024-11-09 17:38:01.652509] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:42.257 [2024-11-09 17:38:01.652528] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:42.257 [2024-11-09 17:38:01.652543] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:43.195 [2024-11-09 17:38:02.656476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:43.195 qpair failed and we were unable to recover it. 00:28:43.195 [2024-11-09 17:38:02.658088] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:43.195 [2024-11-09 17:38:02.658105] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:43.195 [2024-11-09 17:38:02.658113] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:44.132 [2024-11-09 17:38:03.662030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:44.132 qpair failed and we were unable to recover it. 00:28:44.132 [2024-11-09 17:38:03.663420] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:44.132 [2024-11-09 17:38:03.663437] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:44.132 [2024-11-09 17:38:03.663444] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:45.071 [2024-11-09 17:38:04.667328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:45.071 qpair failed and we were unable to recover it. 00:28:45.071 [2024-11-09 17:38:04.668810] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:45.071 [2024-11-09 17:38:04.668825] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:45.071 [2024-11-09 17:38:04.668840] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:46.010 [2024-11-09 17:38:05.672573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:46.010 qpair failed and we were unable to recover it. 
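The rejected connects above are the initiator still probing the original 192.168.100.8 portal, whose target was killed at the start of this test, so every RDMA_CM connect comes back REJECTED and the qpair is reported as unrecoverable until the failover address takes over below. A hypothetical manual check with stock nvme-cli (not something the test runs) would be:

# Hypothetical check of which portal is actually serving the subsystem at this point:
nvme discover -t rdma -a 192.168.100.8 -s 4420   # old target was killed, expected to fail
nvme discover -t rdma -a 192.168.100.9 -s 4420   # failover target, should list nqn.2016-06.io.spdk:cnode1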
00:28:46.010 [2024-11-09 17:38:05.673894] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:46.010 [2024-11-09 17:38:05.673909] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:46.010 [2024-11-09 17:38:05.673917] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:46.997 [2024-11-09 17:38:06.677890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-11-09 17:38:06.679330] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:46.997 [2024-11-09 17:38:06.679347] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:46.997 [2024-11-09 17:38:06.679355] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:47.990 [2024-11-09 17:38:07.683288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:47.990 qpair failed and we were unable to recover it. 00:28:47.990 [2024-11-09 17:38:07.685087] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:47.990 [2024-11-09 17:38:07.685110] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:47.990 [2024-11-09 17:38:07.685118] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.928 [2024-11-09 17:38:08.688990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.929 qpair failed and we were unable to recover it. 00:28:48.929 [2024-11-09 17:38:08.690676] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:48.929 [2024-11-09 17:38:08.690693] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:48.929 [2024-11-09 17:38:08.690702] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.308 [2024-11-09 17:38:09.694542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.308 qpair failed and we were unable to recover it. 00:28:50.308 [2024-11-09 17:38:09.696445] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:50.308 [2024-11-09 17:38:09.696478] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:50.308 [2024-11-09 17:38:09.696490] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:51.247 [2024-11-09 17:38:10.700484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.247 qpair failed and we were unable to recover it. 
00:28:51.247 [2024-11-09 17:38:10.702247] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:51.247 [2024-11-09 17:38:10.702278] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:51.247 [2024-11-09 17:38:10.702290] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:52.185 [2024-11-09 17:38:11.706340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:52.185 qpair failed and we were unable to recover it. 00:28:52.185 [2024-11-09 17:38:11.707804] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:52.185 [2024-11-09 17:38:11.707821] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:52.185 [2024-11-09 17:38:11.707829] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:53.124 [2024-11-09 17:38:12.711774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:53.124 qpair failed and we were unable to recover it. 00:28:53.124 [2024-11-09 17:38:12.711945] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:53.124 A controller has encountered a failure and is being reset. 00:28:53.124 Resorting to new failover address 192.168.100.9 00:28:53.124 [2024-11-09 17:38:12.713507] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:53.124 [2024-11-09 17:38:12.713534] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:53.124 [2024-11-09 17:38:12.713546] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:54.063 [2024-11-09 17:38:13.717449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:54.063 qpair failed and we were unable to recover it. 00:28:54.063 [2024-11-09 17:38:13.717579] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.063 [2024-11-09 17:38:13.717692] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:54.063 [2024-11-09 17:38:13.748109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:54.063 Controller properly reset. 
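Once the Keep Alive submission fails, the initiator marks the controller failed, resets it, and resorts to the alternate portal that was supplied up front in the transport ID string, so the "Controller properly reset." line above is effectively the success criterion for this failover test. For reference, this is the exact reconnect invocation from the top of the test, with the alternate address passed via alt_traddr:

# Reconnect example invocation used by target_disconnect.sh@63 (copied from earlier in this log):
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'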
00:28:54.063 Initializing NVMe Controllers 00:28:54.063 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:54.063 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:54.063 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:54.063 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:54.063 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:54.063 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:54.063 Initialization complete. Launching workers. 00:28:54.063 Starting thread on core 1 00:28:54.063 Starting thread on core 2 00:28:54.063 Starting thread on core 3 00:28:54.063 Starting thread on core 0 00:28:54.063 17:38:13 -- host/target_disconnect.sh@74 -- # sync 00:28:54.063 00:28:54.063 real 0m18.349s 00:28:54.063 user 0m56.245s 00:28:54.063 sys 0m5.719s 00:28:54.063 17:38:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:54.063 17:38:13 -- common/autotest_common.sh@10 -- # set +x 00:28:54.063 ************************************ 00:28:54.063 END TEST nvmf_target_disconnect_tc3 00:28:54.063 ************************************ 00:28:54.323 17:38:13 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:54.323 17:38:13 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:28:54.323 17:38:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:54.323 17:38:13 -- nvmf/common.sh@116 -- # sync 00:28:54.323 17:38:13 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:54.323 17:38:13 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:54.323 17:38:13 -- nvmf/common.sh@119 -- # set +e 00:28:54.323 17:38:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:54.323 17:38:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:54.323 rmmod nvme_rdma 00:28:54.323 rmmod nvme_fabrics 00:28:54.323 17:38:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:54.323 17:38:13 -- nvmf/common.sh@123 -- # set -e 00:28:54.323 17:38:13 -- nvmf/common.sh@124 -- # return 0 00:28:54.323 17:38:13 -- nvmf/common.sh@477 -- # '[' -n 2849962 ']' 00:28:54.323 17:38:13 -- nvmf/common.sh@478 -- # killprocess 2849962 00:28:54.323 17:38:13 -- common/autotest_common.sh@936 -- # '[' -z 2849962 ']' 00:28:54.323 17:38:13 -- common/autotest_common.sh@940 -- # kill -0 2849962 00:28:54.323 17:38:13 -- common/autotest_common.sh@941 -- # uname 00:28:54.323 17:38:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:54.323 17:38:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2849962 00:28:54.323 17:38:13 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:28:54.323 17:38:13 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:28:54.323 17:38:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2849962' 00:28:54.323 killing process with pid 2849962 00:28:54.323 17:38:13 -- common/autotest_common.sh@955 -- # kill 2849962 00:28:54.323 17:38:13 -- common/autotest_common.sh@960 -- # wait 2849962 00:28:54.582 17:38:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:54.582 17:38:14 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:28:54.582 00:28:54.582 real 0m39.444s 00:28:54.582 user 2m23.054s 00:28:54.582 sys 0m14.541s 00:28:54.582 17:38:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:54.582 17:38:14 -- common/autotest_common.sh@10 -- # set +x 00:28:54.582 
************************************ 00:28:54.582 END TEST nvmf_target_disconnect 00:28:54.582 ************************************ 00:28:54.582 17:38:14 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:28:54.582 17:38:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:54.582 17:38:14 -- common/autotest_common.sh@10 -- # set +x 00:28:54.582 17:38:14 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:28:54.582 00:28:54.582 real 21m14.518s 00:28:54.582 user 68m0.948s 00:28:54.582 sys 4m58.414s 00:28:54.582 17:38:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:54.582 17:38:14 -- common/autotest_common.sh@10 -- # set +x 00:28:54.582 ************************************ 00:28:54.582 END TEST nvmf_rdma 00:28:54.582 ************************************ 00:28:54.842 17:38:14 -- spdk/autotest.sh@280 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:28:54.842 17:38:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:54.842 17:38:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:54.842 17:38:14 -- common/autotest_common.sh@10 -- # set +x 00:28:54.842 ************************************ 00:28:54.842 START TEST spdkcli_nvmf_rdma 00:28:54.842 ************************************ 00:28:54.842 17:38:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:28:54.842 * Looking for test storage... 00:28:54.842 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:28:54.842 17:38:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:54.842 17:38:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:54.842 17:38:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:54.842 17:38:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:54.842 17:38:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:54.842 17:38:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:54.842 17:38:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:54.842 17:38:14 -- scripts/common.sh@335 -- # IFS=.-: 00:28:54.842 17:38:14 -- scripts/common.sh@335 -- # read -ra ver1 00:28:54.842 17:38:14 -- scripts/common.sh@336 -- # IFS=.-: 00:28:54.843 17:38:14 -- scripts/common.sh@336 -- # read -ra ver2 00:28:54.843 17:38:14 -- scripts/common.sh@337 -- # local 'op=<' 00:28:54.843 17:38:14 -- scripts/common.sh@339 -- # ver1_l=2 00:28:54.843 17:38:14 -- scripts/common.sh@340 -- # ver2_l=1 00:28:54.843 17:38:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:54.843 17:38:14 -- scripts/common.sh@343 -- # case "$op" in 00:28:54.843 17:38:14 -- scripts/common.sh@344 -- # : 1 00:28:54.843 17:38:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:54.843 17:38:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:54.843 17:38:14 -- scripts/common.sh@364 -- # decimal 1 00:28:54.843 17:38:14 -- scripts/common.sh@352 -- # local d=1 00:28:54.843 17:38:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:54.843 17:38:14 -- scripts/common.sh@354 -- # echo 1 00:28:54.843 17:38:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:54.843 17:38:14 -- scripts/common.sh@365 -- # decimal 2 00:28:54.843 17:38:14 -- scripts/common.sh@352 -- # local d=2 00:28:54.843 17:38:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:54.843 17:38:14 -- scripts/common.sh@354 -- # echo 2 00:28:54.843 17:38:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:54.843 17:38:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:54.843 17:38:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:54.843 17:38:14 -- scripts/common.sh@367 -- # return 0 00:28:54.843 17:38:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:54.843 17:38:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:54.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.843 --rc genhtml_branch_coverage=1 00:28:54.843 --rc genhtml_function_coverage=1 00:28:54.843 --rc genhtml_legend=1 00:28:54.843 --rc geninfo_all_blocks=1 00:28:54.843 --rc geninfo_unexecuted_blocks=1 00:28:54.843 00:28:54.843 ' 00:28:54.843 17:38:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:54.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.843 --rc genhtml_branch_coverage=1 00:28:54.843 --rc genhtml_function_coverage=1 00:28:54.843 --rc genhtml_legend=1 00:28:54.843 --rc geninfo_all_blocks=1 00:28:54.843 --rc geninfo_unexecuted_blocks=1 00:28:54.843 00:28:54.843 ' 00:28:54.843 17:38:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:54.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.843 --rc genhtml_branch_coverage=1 00:28:54.843 --rc genhtml_function_coverage=1 00:28:54.843 --rc genhtml_legend=1 00:28:54.843 --rc geninfo_all_blocks=1 00:28:54.843 --rc geninfo_unexecuted_blocks=1 00:28:54.843 00:28:54.843 ' 00:28:54.843 17:38:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:54.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.843 --rc genhtml_branch_coverage=1 00:28:54.843 --rc genhtml_function_coverage=1 00:28:54.843 --rc genhtml_legend=1 00:28:54.843 --rc geninfo_all_blocks=1 00:28:54.843 --rc geninfo_unexecuted_blocks=1 00:28:54.843 00:28:54.843 ' 00:28:54.843 17:38:14 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:28:54.843 17:38:14 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:54.843 17:38:14 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:28:54.843 17:38:14 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:54.843 17:38:14 -- nvmf/common.sh@7 -- # uname -s 00:28:54.843 17:38:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.843 17:38:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.843 17:38:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.843 17:38:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.843 17:38:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.843 17:38:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
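The variables above pin down the test network: RDMA addresses come out of 192.168.100.0/24 starting at host .8, which is why the two mlx5 ports show up as 192.168.100.8 and 192.168.100.9 further down. A minimal sketch of that allocation (the loop body is an assumption; only the variables and the resulting addresses appear in this log):

# How allocate_nic_ips arrives at .8/.9 for the two ports (sketch, not the actual function body):
count=$NVMF_IP_LEAST_ADDR                               # 8
for nic_name in mlx_0_0 mlx_0_1; do
    ip addr add "$NVMF_IP_PREFIX.$count/24" dev "$nic_name"
    ((count++))                                         # mlx_0_0 -> .8, mlx_0_1 -> .9
done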
00:28:54.843 17:38:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.843 17:38:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.843 17:38:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.843 17:38:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.843 17:38:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:54.843 17:38:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:54.843 17:38:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.843 17:38:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.843 17:38:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:54.843 17:38:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:54.843 17:38:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.843 17:38:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.843 17:38:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.843 17:38:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.843 17:38:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.843 17:38:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.843 17:38:14 -- paths/export.sh@5 -- # export PATH 00:28:54.843 17:38:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.843 17:38:14 -- nvmf/common.sh@46 -- # : 0 00:28:54.843 17:38:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:54.843 17:38:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:54.843 17:38:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:54.843 17:38:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.843 17:38:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.843 17:38:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:54.843 17:38:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 
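The host identity fixed above (NVME_HOSTNQN from nvme gen-hostnqn and the matching NVME_HOSTID) is what the suite appends to every initiator-side connect through the NVME_HOST array. A hypothetical manual connect using those values would look like the following; the suite expands $NVME_CONNECT and "${NVME_HOST[@]}" instead, and the portal and subsystem NQN here are placeholders:

# Hypothetical connect with the host identity from nvmf/common.sh (traddr and subnqn are placeholders):
nvme connect -t rdma -a <traddr> -s 4420 -n <subsystem-nqn> \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid=8013ee90-59d8-e711-906e-00163566263e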
00:28:54.843 17:38:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:54.843 17:38:14 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:54.843 17:38:14 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:54.843 17:38:14 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:54.843 17:38:14 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:54.843 17:38:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:54.843 17:38:14 -- common/autotest_common.sh@10 -- # set +x 00:28:54.843 17:38:14 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:54.843 17:38:14 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2852590 00:28:54.843 17:38:14 -- spdkcli/common.sh@34 -- # waitforlisten 2852590 00:28:54.843 17:38:14 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:54.843 17:38:14 -- common/autotest_common.sh@829 -- # '[' -z 2852590 ']' 00:28:54.843 17:38:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.843 17:38:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:54.843 17:38:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.843 17:38:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:54.843 17:38:14 -- common/autotest_common.sh@10 -- # set +x 00:28:55.102 [2024-11-09 17:38:14.643525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:55.102 [2024-11-09 17:38:14.643575] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852590 ] 00:28:55.102 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.102 [2024-11-09 17:38:14.710290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:55.102 [2024-11-09 17:38:14.783756] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:55.102 [2024-11-09 17:38:14.783892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.102 [2024-11-09 17:38:14.783895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.041 17:38:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:56.041 17:38:15 -- common/autotest_common.sh@862 -- # return 0 00:28:56.041 17:38:15 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:56.041 17:38:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:56.041 17:38:15 -- common/autotest_common.sh@10 -- # set +x 00:28:56.041 17:38:15 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:56.041 17:38:15 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:28:56.041 17:38:15 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:28:56.041 17:38:15 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:56.041 17:38:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.041 17:38:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:56.041 17:38:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:56.041 17:38:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:56.041 17:38:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.041 17:38:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:56.041 17:38:15 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:28:56.041 17:38:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:56.041 17:38:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:56.041 17:38:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:56.041 17:38:15 -- common/autotest_common.sh@10 -- # set +x 00:29:02.611 17:38:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:02.611 17:38:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:02.611 17:38:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:02.611 17:38:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:02.611 17:38:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:02.611 17:38:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:02.611 17:38:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:02.611 17:38:22 -- nvmf/common.sh@294 -- # net_devs=() 00:29:02.611 17:38:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:02.611 17:38:22 -- nvmf/common.sh@295 -- # e810=() 00:29:02.611 17:38:22 -- nvmf/common.sh@295 -- # local -ga e810 00:29:02.611 17:38:22 -- nvmf/common.sh@296 -- # x722=() 00:29:02.611 17:38:22 -- nvmf/common.sh@296 -- # local -ga x722 00:29:02.611 17:38:22 -- nvmf/common.sh@297 -- # mlx=() 00:29:02.611 17:38:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:02.611 17:38:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.611 17:38:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.611 17:38:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.611 17:38:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.611 17:38:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.611 17:38:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.611 17:38:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.611 17:38:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.611 17:38:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.611 17:38:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.611 17:38:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.611 17:38:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:02.611 17:38:22 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:02.611 17:38:22 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:02.611 17:38:22 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:02.611 17:38:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:02.611 17:38:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:02.611 17:38:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:02.611 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:02.611 17:38:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:02.611 17:38:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:02.611 17:38:22 
-- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:02.611 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:02.611 17:38:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:02.611 17:38:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:02.611 17:38:22 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:02.611 17:38:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.611 17:38:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:02.611 17:38:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.611 17:38:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:02.611 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:02.611 17:38:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.611 17:38:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:02.611 17:38:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.611 17:38:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:02.611 17:38:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.611 17:38:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:02.611 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:02.611 17:38:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.611 17:38:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:02.611 17:38:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:02.611 17:38:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:02.611 17:38:22 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:02.611 17:38:22 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:02.611 17:38:22 -- nvmf/common.sh@57 -- # uname 00:29:02.611 17:38:22 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:02.611 17:38:22 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:02.611 17:38:22 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:02.611 17:38:22 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:02.611 17:38:22 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:02.611 17:38:22 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:02.611 17:38:22 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:02.611 17:38:22 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:02.611 17:38:22 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:02.611 17:38:22 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:02.611 17:38:22 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:02.611 17:38:22 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:02.611 17:38:22 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:02.611 17:38:22 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:02.612 17:38:22 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:02.612 17:38:22 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:02.612 17:38:22 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:29:02.612 17:38:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:02.612 17:38:22 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:02.612 17:38:22 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:02.612 17:38:22 -- nvmf/common.sh@104 -- # continue 2 00:29:02.612 17:38:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:02.612 17:38:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:02.612 17:38:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:02.612 17:38:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:02.612 17:38:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:02.612 17:38:22 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:02.612 17:38:22 -- nvmf/common.sh@104 -- # continue 2 00:29:02.612 17:38:22 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:02.612 17:38:22 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:02.612 17:38:22 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:02.612 17:38:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:02.612 17:38:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:02.612 17:38:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:02.612 17:38:22 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:02.612 17:38:22 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:02.612 17:38:22 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:02.612 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:02.612 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:02.612 altname enp217s0f0np0 00:29:02.612 altname ens818f0np0 00:29:02.612 inet 192.168.100.8/24 scope global mlx_0_0 00:29:02.612 valid_lft forever preferred_lft forever 00:29:02.612 17:38:22 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:02.612 17:38:22 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:02.612 17:38:22 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:02.612 17:38:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:02.612 17:38:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:02.612 17:38:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:02.612 17:38:22 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:02.612 17:38:22 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:29:02.612 17:38:22 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:02.612 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:02.612 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:02.612 altname enp217s0f1np1 00:29:02.612 altname ens818f1np1 00:29:02.612 inet 192.168.100.9/24 scope global mlx_0_1 00:29:02.612 valid_lft forever preferred_lft forever 00:29:02.612 17:38:22 -- nvmf/common.sh@410 -- # return 0 00:29:02.612 17:38:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:02.612 17:38:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:02.612 17:38:22 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:02.612 17:38:22 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:02.612 17:38:22 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:02.612 17:38:22 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:02.612 17:38:22 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:02.612 17:38:22 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:02.612 17:38:22 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:02.872 17:38:22 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:02.872 17:38:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:02.872 17:38:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:02.872 17:38:22 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:02.872 17:38:22 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:02.872 17:38:22 -- nvmf/common.sh@104 -- # continue 2 00:29:02.872 17:38:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:02.872 17:38:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:02.872 17:38:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:02.872 17:38:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:02.872 17:38:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:02.872 17:38:22 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:02.872 17:38:22 -- nvmf/common.sh@104 -- # continue 2 00:29:02.872 17:38:22 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:02.872 17:38:22 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:02.872 17:38:22 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:02.872 17:38:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:02.872 17:38:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:02.872 17:38:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:02.872 17:38:22 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:02.872 17:38:22 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:02.872 17:38:22 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:02.872 17:38:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:02.872 17:38:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:02.872 17:38:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:02.872 17:38:22 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:02.872 192.168.100.9' 00:29:02.872 17:38:22 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:02.872 192.168.100.9' 00:29:02.872 17:38:22 -- nvmf/common.sh@445 -- # head -n 1 00:29:02.872 17:38:22 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:02.872 17:38:22 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:02.872 192.168.100.9' 00:29:02.872 17:38:22 -- nvmf/common.sh@446 -- # tail -n +2 00:29:02.872 17:38:22 -- nvmf/common.sh@446 -- # head -n 1 00:29:02.872 17:38:22 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:02.872 17:38:22 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:02.872 17:38:22 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:02.872 17:38:22 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:02.872 17:38:22 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:02.872 17:38:22 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:02.872 17:38:22 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:29:02.872 17:38:22 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:02.872 17:38:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:02.872 17:38:22 -- common/autotest_common.sh@10 -- # set +x 00:29:02.872 17:38:22 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:02.872 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:02.872 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:02.872 '\''/bdevs/malloc create 32 512 Malloc4'\'' 
'\''Malloc4'\'' True 00:29:02.872 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:02.872 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:02.872 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:02.872 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:02.872 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:02.872 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:02.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:02.872 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:02.872 ' 00:29:03.131 [2024-11-09 17:38:22.825658] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:05.665 [2024-11-09 17:38:24.887966] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d426e0/0x1d44a00) succeed. 
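The long quoted list handed to spdkcli_job.py above is simply a batch of spdkcli commands with expected-output matching; each one can also be issued through the in-tree spdkcli client. A sketch of the first few, assuming the usual scripts/spdkcli.py entry point (the test drives them through spdkcli_job.py instead):

# First few commands from the batch above, issued one at a time through spdkcli (sketch):
./scripts/spdkcli.py "/bdevs/malloc create 32 512 Malloc1"
./scripts/spdkcli.py "nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
./scripts/spdkcli.py "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
./scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4"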
00:29:05.665 [2024-11-09 17:38:24.898975] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d43dc0/0x1d860a0) succeed. 00:29:06.603 [2024-11-09 17:38:26.145380] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:29:09.141 [2024-11-09 17:38:28.336394] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:29:10.518 [2024-11-09 17:38:30.226766] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:29:12.422 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:12.422 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:12.422 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:12.422 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:12.422 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:12.422 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:12.422 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:12.422 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:12.422 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:12.422 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 
192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:12.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:12.422 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:12.422 17:38:31 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:12.422 17:38:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:12.422 17:38:31 -- common/autotest_common.sh@10 -- # set +x 00:29:12.422 17:38:31 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:12.422 17:38:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:12.422 17:38:31 -- common/autotest_common.sh@10 -- # set +x 00:29:12.422 17:38:31 -- spdkcli/nvmf.sh@69 -- # check_match 00:29:12.422 17:38:31 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:12.681 17:38:32 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:12.681 17:38:32 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:12.681 17:38:32 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:12.681 17:38:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:12.681 17:38:32 -- common/autotest_common.sh@10 -- # set +x 00:29:12.681 17:38:32 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:12.681 17:38:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:12.681 17:38:32 -- common/autotest_common.sh@10 -- # set +x 00:29:12.681 17:38:32 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:12.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:12.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:12.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:12.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:29:12.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:29:12.681 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:12.681 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:12.681 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:12.681 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:12.681 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:12.681 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:12.681 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:12.681 '\''/bdevs/malloc 
delete Malloc1'\'' '\''Malloc1'\'' 00:29:12.681 ' 00:29:17.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:17.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:17.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:17.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:17.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:29:17.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:29:17.959 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:17.959 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:17.959 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:17.959 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:17.959 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:17.959 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:17.959 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:17.959 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:17.959 17:38:37 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:17.959 17:38:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:17.959 17:38:37 -- common/autotest_common.sh@10 -- # set +x 00:29:17.959 17:38:37 -- spdkcli/nvmf.sh@90 -- # killprocess 2852590 00:29:17.959 17:38:37 -- common/autotest_common.sh@936 -- # '[' -z 2852590 ']' 00:29:17.959 17:38:37 -- common/autotest_common.sh@940 -- # kill -0 2852590 00:29:17.959 17:38:37 -- common/autotest_common.sh@941 -- # uname 00:29:17.959 17:38:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:17.959 17:38:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2852590 00:29:17.959 17:38:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:17.959 17:38:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:17.959 17:38:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2852590' 00:29:17.959 killing process with pid 2852590 00:29:17.959 17:38:37 -- common/autotest_common.sh@955 -- # kill 2852590 00:29:17.959 [2024-11-09 17:38:37.435513] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:17.959 17:38:37 -- common/autotest_common.sh@960 -- # wait 2852590 00:29:17.959 17:38:37 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:29:17.959 17:38:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:17.959 17:38:37 -- nvmf/common.sh@116 -- # sync 00:29:17.959 17:38:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:17.959 17:38:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:17.959 17:38:37 -- nvmf/common.sh@119 -- # set +e 00:29:17.959 17:38:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:17.959 17:38:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:17.959 rmmod nvme_rdma 00:29:17.959 rmmod nvme_fabrics 
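(The nvmftestfini teardown traced here, and continuing below, unloads the NVMe/RDMA kernel modules once the target process has been killed. A condensed sketch of the rdma branch, with the retry loop from nvmf/common.sh abbreviated to a single pass:

    set +e                        # failures tolerated while the modules wind down, as in the trace
    modprobe -v -r nvme-rdma      # verbose output above: rmmod nvme_rdma, rmmod nvme_fabrics
    modprobe -v -r nvme-fabrics   # harmless if the previous call already pulled nvme_fabrics out
    set -e
)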
00:29:17.959 17:38:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:17.959 17:38:37 -- nvmf/common.sh@123 -- # set -e 00:29:18.219 17:38:37 -- nvmf/common.sh@124 -- # return 0 00:29:18.219 17:38:37 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:29:18.219 17:38:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:18.219 17:38:37 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:18.219 00:29:18.219 real 0m23.358s 00:29:18.219 user 0m49.657s 00:29:18.219 sys 0m6.163s 00:29:18.219 17:38:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:18.219 17:38:37 -- common/autotest_common.sh@10 -- # set +x 00:29:18.219 ************************************ 00:29:18.219 END TEST spdkcli_nvmf_rdma 00:29:18.219 ************************************ 00:29:18.219 17:38:37 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:29:18.219 17:38:37 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:29:18.219 17:38:37 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:18.219 17:38:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:18.219 17:38:37 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:18.219 17:38:37 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:29:18.219 17:38:37 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:29:18.219 17:38:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:18.219 17:38:37 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:18.219 17:38:37 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:18.219 17:38:37 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:18.219 17:38:37 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:29:18.219 17:38:37 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:29:18.219 17:38:37 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:18.219 17:38:37 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:18.219 17:38:37 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:29:18.219 17:38:37 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:29:18.219 17:38:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:18.219 17:38:37 -- common/autotest_common.sh@10 -- # set +x 00:29:18.219 17:38:37 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:29:18.219 17:38:37 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:29:18.219 17:38:37 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:29:18.219 17:38:37 -- common/autotest_common.sh@10 -- # set +x 00:29:24.854 INFO: APP EXITING 00:29:24.854 INFO: killing all VMs 00:29:24.854 INFO: killing vhost app 00:29:24.854 INFO: EXIT DONE 00:29:27.391 Waiting for block devices as requested 00:29:27.391 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:27.391 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:27.650 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:27.650 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:27.650 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:27.910 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:27.910 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:27.910 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:27.910 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:28.169 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:28.169 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:28.169 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:28.429 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:28.429 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:28.429 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:28.688 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:28.688 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:29:32.885 
Cleaning 00:29:32.885 Removing: /var/run/dpdk/spdk0/config 00:29:32.885 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:32.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:32.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:32.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:32.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:32.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:32.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:32.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:32.886 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:32.886 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:32.886 Removing: /var/run/dpdk/spdk1/config 00:29:32.886 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:32.886 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:32.886 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:32.886 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:32.886 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:32.886 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:32.886 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:32.886 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:32.886 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:32.886 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:32.886 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:32.886 Removing: /var/run/dpdk/spdk2/config 00:29:32.886 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:32.886 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:32.886 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:32.886 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:32.886 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:32.886 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:32.886 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:32.886 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:32.886 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:32.886 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:32.886 Removing: /var/run/dpdk/spdk3/config 00:29:32.886 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:32.886 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:32.886 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:32.886 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:32.886 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:32.886 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:32.886 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:32.886 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:32.886 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:32.886 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:32.886 Removing: /var/run/dpdk/spdk4/config 00:29:32.886 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:32.886 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:32.886 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:32.886 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:32.886 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:32.886 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:32.886 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:32.886 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:32.886 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
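(The files being removed here are DPDK/EAL runtime state that each SPDK process in this run (spdk0 through spdk4) left under /var/run/dpdk: the per-instance config, memory-segment and memzone fbarrays, hugepage bookkeeping, and a multiprocess socket. As a rough manual equivalent of this part of autotest_cleanup, with the path pattern taken from the Removing: lines above:

    for d in /var/run/dpdk/spdk*; do
        sudo rm -rf "$d"   # config, fbarray_memseg-2048k-*, fbarray_memzone, hugepage_info, mp_socket
    done
)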
00:29:32.886 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:32.886 Removing: /dev/shm/bdevperf_trace.pid2681951 00:29:32.886 Removing: /dev/shm/bdevperf_trace.pid2776248 00:29:32.886 Removing: /dev/shm/bdev_svc_trace.1 00:29:32.886 Removing: /dev/shm/nvmf_trace.0 00:29:32.886 Removing: /dev/shm/spdk_tgt_trace.pid2517431 00:29:32.886 Removing: /var/run/dpdk/spdk0 00:29:32.886 Removing: /var/run/dpdk/spdk1 00:29:32.886 Removing: /var/run/dpdk/spdk2 00:29:32.886 Removing: /var/run/dpdk/spdk3 00:29:32.886 Removing: /var/run/dpdk/spdk4 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2514751 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2516032 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2517431 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2518090 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2523155 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2524771 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2525451 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2525846 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2526227 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2526585 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2526813 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2527102 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2527418 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2528299 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2531488 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2531916 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2532352 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2532368 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2532941 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2533200 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2533549 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2533790 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2534082 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2534315 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2534413 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2534665 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2535141 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2535340 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2535676 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2535983 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2536071 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2536313 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2536578 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2536778 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2536961 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2537186 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2537446 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2537737 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2538005 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2538286 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2538561 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2538844 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2539022 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2539249 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2539433 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2539704 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2539978 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2540264 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2540530 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2540819 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2541045 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2541270 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2541472 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2541695 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2541955 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2542245 00:29:32.886 Removing: 
/var/run/dpdk/spdk_pid2542513 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2542799 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2543071 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2543366 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2543576 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2543805 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2543982 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2544228 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2544505 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2544789 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2545067 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2545357 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2545625 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2545871 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2546055 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2546263 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2546544 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2546892 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2551039 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2647880 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2652141 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2662847 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2668223 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2671931 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2672731 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2681951 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2682292 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2686556 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2693026 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2695798 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2706088 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2730901 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2734511 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2739723 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2774097 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2775177 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2776248 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2780403 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2788132 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2789125 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2790018 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2790991 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2791368 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2795920 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2795922 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2800495 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2801037 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2801603 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2802383 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2802392 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2804834 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2806727 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2808624 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2810579 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2812500 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2814450 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2820477 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2821094 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2823954 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2825177 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2832266 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2834995 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2840650 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2840917 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2846914 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2847319 00:29:32.886 Removing: /var/run/dpdk/spdk_pid2849323 00:29:32.887 Removing: 
/var/run/dpdk/spdk_pid2852590 00:29:32.887 Clean 00:29:33.146 killing process with pid 2465100 00:29:51.245 killing process with pid 2465097 00:29:51.245 killing process with pid 2465099 00:29:51.245 killing process with pid 2465098 00:29:51.245 17:39:09 -- common/autotest_common.sh@1446 -- # return 0 00:29:51.245 17:39:09 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:29:51.245 17:39:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:51.245 17:39:09 -- common/autotest_common.sh@10 -- # set +x 00:29:51.245 17:39:09 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:29:51.245 17:39:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:51.245 17:39:09 -- common/autotest_common.sh@10 -- # set +x 00:29:51.245 17:39:09 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:29:51.245 17:39:09 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:29:51.246 17:39:09 -- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:29:51.246 17:39:09 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:29:51.246 17:39:09 -- spdk/autotest.sh@383 -- # hostname 00:29:51.246 17:39:09 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:29:51.246 geninfo: WARNING: invalid characters removed from testname! 00:30:06.133 17:39:25 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:08.039 17:39:27 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:09.417 17:39:29 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:11.325 17:39:30 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:12.704 17:39:32 -- spdk/autotest.sh@391 -- # 
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:14.082 17:39:33 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:15.989 17:39:35 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:15.989 17:39:35 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:30:15.989 17:39:35 -- common/autotest_common.sh@1690 -- $ lcov --version 00:30:15.989 17:39:35 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:30:15.989 17:39:35 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:30:15.989 17:39:35 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:30:15.989 17:39:35 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:30:15.989 17:39:35 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:30:15.989 17:39:35 -- scripts/common.sh@335 -- $ IFS=.-: 00:30:15.989 17:39:35 -- scripts/common.sh@335 -- $ read -ra ver1 00:30:15.989 17:39:35 -- scripts/common.sh@336 -- $ IFS=.-: 00:30:15.989 17:39:35 -- scripts/common.sh@336 -- $ read -ra ver2 00:30:15.989 17:39:35 -- scripts/common.sh@337 -- $ local 'op=<' 00:30:15.989 17:39:35 -- scripts/common.sh@339 -- $ ver1_l=2 00:30:15.989 17:39:35 -- scripts/common.sh@340 -- $ ver2_l=1 00:30:15.989 17:39:35 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:30:15.989 17:39:35 -- scripts/common.sh@343 -- $ case "$op" in 00:30:15.989 17:39:35 -- scripts/common.sh@344 -- $ : 1 00:30:15.989 17:39:35 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:30:15.989 17:39:35 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:15.989 17:39:35 -- scripts/common.sh@364 -- $ decimal 1 00:30:15.989 17:39:35 -- scripts/common.sh@352 -- $ local d=1 00:30:15.989 17:39:35 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:30:15.989 17:39:35 -- scripts/common.sh@354 -- $ echo 1 00:30:15.989 17:39:35 -- scripts/common.sh@364 -- $ ver1[v]=1 00:30:15.989 17:39:35 -- scripts/common.sh@365 -- $ decimal 2 00:30:15.989 17:39:35 -- scripts/common.sh@352 -- $ local d=2 00:30:15.989 17:39:35 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:30:15.989 17:39:35 -- scripts/common.sh@354 -- $ echo 2 00:30:15.989 17:39:35 -- scripts/common.sh@365 -- $ ver2[v]=2 00:30:15.989 17:39:35 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:30:15.989 17:39:35 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:30:15.989 17:39:35 -- scripts/common.sh@367 -- $ return 0 00:30:15.989 17:39:35 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:15.989 17:39:35 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:30:15.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.989 --rc genhtml_branch_coverage=1 00:30:15.989 --rc genhtml_function_coverage=1 00:30:15.989 --rc genhtml_legend=1 00:30:15.989 --rc geninfo_all_blocks=1 00:30:15.989 --rc geninfo_unexecuted_blocks=1 00:30:15.989 00:30:15.989 ' 00:30:15.989 17:39:35 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:30:15.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.989 --rc genhtml_branch_coverage=1 00:30:15.989 --rc genhtml_function_coverage=1 00:30:15.989 --rc genhtml_legend=1 00:30:15.989 --rc geninfo_all_blocks=1 00:30:15.989 --rc geninfo_unexecuted_blocks=1 00:30:15.989 00:30:15.989 ' 00:30:15.989 17:39:35 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:30:15.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.989 --rc genhtml_branch_coverage=1 00:30:15.989 --rc genhtml_function_coverage=1 00:30:15.989 --rc genhtml_legend=1 00:30:15.989 --rc geninfo_all_blocks=1 00:30:15.989 --rc geninfo_unexecuted_blocks=1 00:30:15.989 00:30:15.989 ' 00:30:15.989 17:39:35 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:30:15.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.989 --rc genhtml_branch_coverage=1 00:30:15.989 --rc genhtml_function_coverage=1 00:30:15.989 --rc genhtml_legend=1 00:30:15.989 --rc geninfo_all_blocks=1 00:30:15.989 --rc geninfo_unexecuted_blocks=1 00:30:15.989 00:30:15.989 ' 00:30:15.989 17:39:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:15.989 17:39:35 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:15.989 17:39:35 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.989 17:39:35 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.989 17:39:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.989 17:39:35 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.989 17:39:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.989 17:39:35 -- paths/export.sh@5 -- $ export PATH 00:30:15.989 17:39:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.989 17:39:35 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:30:15.989 17:39:35 -- common/autobuild_common.sh@440 -- $ date +%s 00:30:15.989 17:39:35 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731170375.XXXXXX 00:30:15.989 17:39:35 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731170375.lTEY31 00:30:15.989 17:39:35 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:30:15.989 17:39:35 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:30:15.989 17:39:35 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:30:15.989 17:39:35 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:15.989 17:39:35 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:15.989 17:39:35 -- common/autobuild_common.sh@456 -- $ get_config_params 00:30:15.989 17:39:35 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:30:15.989 17:39:35 -- common/autotest_common.sh@10 -- $ set +x 00:30:15.989 17:39:35 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:30:15.990 17:39:35 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:30:15.990 17:39:35 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:15.990 17:39:35 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:15.990 17:39:35 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:15.990 17:39:35 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:15.990 17:39:35 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:15.990 17:39:35 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:15.990 17:39:35 -- common/autotest_common.sh@735 -- $ '[' -x 
/usr/local/FlameGraph/flamegraph.pl ']' 00:30:15.990 17:39:35 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:15.990 17:39:35 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:15.990 + [[ -n 2422734 ]] 00:30:15.990 + sudo kill 2422734 00:30:16.000 [Pipeline] } 00:30:16.016 [Pipeline] // stage 00:30:16.021 [Pipeline] } 00:30:16.036 [Pipeline] // timeout 00:30:16.041 [Pipeline] } 00:30:16.055 [Pipeline] // catchError 00:30:16.060 [Pipeline] } 00:30:16.075 [Pipeline] // wrap 00:30:16.081 [Pipeline] } 00:30:16.094 [Pipeline] // catchError 00:30:16.103 [Pipeline] stage 00:30:16.106 [Pipeline] { (Epilogue) 00:30:16.119 [Pipeline] catchError 00:30:16.121 [Pipeline] { 00:30:16.133 [Pipeline] echo 00:30:16.135 Cleanup processes 00:30:16.141 [Pipeline] sh 00:30:16.428 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:16.428 2874461 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:16.442 [Pipeline] sh 00:30:16.728 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:16.728 ++ grep -v 'sudo pgrep' 00:30:16.728 ++ awk '{print $1}' 00:30:16.728 + sudo kill -9 00:30:16.728 + true 00:30:16.784 [Pipeline] sh 00:30:17.120 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:17.120 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:30:23.684 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:30:26.984 [Pipeline] sh 00:30:27.269 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:27.269 Artifacts sizes are good 00:30:27.284 [Pipeline] archiveArtifacts 00:30:27.291 Archiving artifacts 00:30:27.418 [Pipeline] sh 00:30:27.703 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:30:27.718 [Pipeline] cleanWs 00:30:27.729 [WS-CLEANUP] Deleting project workspace... 00:30:27.729 [WS-CLEANUP] Deferred wipeout is used... 00:30:27.736 [WS-CLEANUP] done 00:30:27.738 [Pipeline] } 00:30:27.755 [Pipeline] // catchError 00:30:27.767 [Pipeline] sh 00:30:28.051 + logger -p user.info -t JENKINS-CI 00:30:28.060 [Pipeline] } 00:30:28.074 [Pipeline] // stage 00:30:28.079 [Pipeline] } 00:30:28.093 [Pipeline] // node 00:30:28.098 [Pipeline] End of Pipeline 00:30:28.157 Finished: SUCCESS
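(For reference, the leftover-process sweep run at the start of the Epilogue above boils down to the following pattern, reconstructed from the traced "+" lines; the trailing true mirrors the "+ true" in the trace, where kill -9 receives no PIDs because only the pgrep wrapper itself matched:

    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true   # kill fails when $pids is empty; true keeps the pipeline step green
)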